[Binary artifact: tar archive of var/home/core/zuul-output/ containing logs/kubelet.log.gz (gzip-compressed kubelet log from a Zuul CI job, owner core). The compressed payload is binary data and is not recoverable as text; it is omitted here.]
kE.veN9.)1a)Q*ϧ,umz1zI+m6eg3>*d8I!%-H\ც![yeCuVӒ}qQUEpCom 4BHVc߄>j4 X"1O܀Sa5ue<@XB,b*HP\~|G\嵣C 49g7"ؠGt%FS4t%eDasӅS}Q~nH`>tjC8~Q&r,72RD2ב(-ZA2['GcTZUi/3>HBy,JpJzAy:]ĵx@ hYM{c M&"q+{mY]Z$M):iâne%ʩ݈p|v[n'mIgu˝ic^kd^\*'L4TdS$R9X!3ZGt ëi3m/:,,M LZPB9E(,@#"9H]Fهwp;{"=K4F6Xo ]?`PrmUR]PTD&x #ʋ/ӏд=e_ˑ Uve% IV c iǢe,d oQIdQ wfN`Xf~Ǣzr9nqxk҅i6RW(%OO'oMO6h\L',pZҐǜ~Ei㔫FNΏI/H_] !mp'#kLKvȗ a3.8dg`24}Ï47w'VX.-LKč.»e9:{W4(fE|oibG-t{NާK׃HEmo-ףNf9pzvffW߯h8/%w74P;-//5wQm~Ŀ_]'f#0+a>\|޶޴sd]mo9+|ٽ/ŷa.Y &3{AWGra[-Yݲ,vq5E٬OŪdټ jpΖ@\IW3:Y,QiCO|̣W|9г6G')l.;jW_iu7{\HnXy2Y&ʰ>d1?]t8RsEeqGp: ?}oL|_?#8G#QwK\(ce qۏh+ي;p(Yơ ˯`pOFʷUt w`~}X@f4߯ٽiMs 4JUrG?_n#YOB!}t%! M]o mHƝэݻm$@OFFH)GbyPR`8CsBHP"RjI\E8tߛ Kpgny0*XKIS \n~Og4 EDDަ䣉ÙN3YŁ؜ˁi|p79@R5jpymݸ\lm/:>/øVQdi VFG&J}J^ n<=ݢBF\q{ǙAcYk7f)(rбԱi\t;S5lk xfаPہ0e|4Y<~YgS!wOIB%?vWGW":DȶvhlHr@*yw}|bY"RpL`[Lxp;E@(|BCHXbDw袒N Sٛ:k֔z-Vb"&-&HC>}I`*DfvfCJ%+%s:l̊{-~zߧ#?=|=Iq0ȘS\ ug8.Q>=F䩵pHh<.Q,ESԑ"UOL3ub"bNE=CXRCcO._[ьYu4Tū/wNN}*-}s~ُh6JPCJM$kh%!D锨ăF2ъIn{vÃru {S.o\C.V*6T}5+ !6ϫj8 w/vŻcy<:WͲ)^qP +$Վ[^.T{M0dַyӤCx^&H`5S:aQ=5z/zm_;%v . VN| LA= MQd`tka;d}[řҾr~LcP~ :<`1 MҤH `2N+##ge@<\"ALjE60(3Y`ji剶e.ZL}at:v,J}Jgn~\Gp |Yn;=!w:RsJ̐O.:2v`dmm[:5џz pā!`sj r\fmv#[ g9Ο8S@ܱ`h^!QPI)&5BpX ًd|2=o>˫!ߙ~&:4mmjȇ}7 }S^j>+hݟLv2ť6.mzPb6$ƛ|4~mo4dviH8`fl;ի6KME.&qjwU*"R-#C[ְuuIAҮz[Wz;]՚jӳf _{#jʆYŢpj5'?_iTHsۣ\}X{D|, յ݋j=;bM!~7HK[].qwNNj{[fPGr)w̐ }cZl݋uDVoC^;nst9N/-ͻ$m ޱH99r:-u^2,vS޶ƌW4ѳl}- -ݶ[ϝ;kp?ln컹юҎKEfo蓒&e9ⴐ,䉠qjڙ\μ4䔒h5,xt&EMYϜjs)tTt;~gΘ$Q Q^ĸAcH/ XH7OLs;x<}:{e+E+LA^{mX[Ɨ |2\&9-~0d h%kt@t>(udroph؜hos'SS.˴ H)P4zn \H} [/ rSYp9#PLA툠lgXg_|ѳ(_)7A="ӆsFG;"7 Q[ x!"Og8hd< =u ͖A=ʩNJ~rz(6C:! !&,/pA,R7EV^^چHdCl@&r;X x`d\07v\Rܴ^1S# ǵ1jL$=(;IJ>crpa.$RA\(iph~ )hpemH:c'3oʞH}nyorf"l#tLHƞ?waV|՝*)!RlSa&hLل Rb6DG( \:e WoHN;kdDKPcFxI.RAt*Ř[TIiõ87k< t2.nݟ=H9Nu$L*kG*)kqq,GVH 8opO t.Z.,=n8t:w-}أ/-@cWe:h&Y$]'1_}(j+چb=ˤU1@[i-r ڼgW7msWrʷv,P䰨2=?L`su lWy뻟JLzbL@h$dMT Rֳfg__̞\g?$?hһy] 4-I._!ي嘙 6,]j7~xZT_.7iӗ ,_iȪߙMdz}W-Φ6J{ y:MWN'_UKWٿ~H$R<*x,p5pR*Bz2pev|9B[ l{`vҊI){WpeS)UWWYZWYJ p6\yAUN2^5tf^TD%Ud(p45e5ʗ kϥ1`޿R(L L|40Utv { (%0ر?\&kfvc3.OɳqxǢ_8^qY=Of-lSk1[0?;3{ 8.;z^LZ <S{scm欦c(f=l2'Ue~fGM. i9#v?Q8pWϸL"5` !kmX;R_ !$ $8a!↦Q#q$jI=bj2U_USPrR W|ϥy@xV$F}n4+5 ;P:HwDU뽃HAOs}f%OTZΣ.Zf`(muY4f}8 U[\V8 ӠD2#Š@$=H^Fv k[x\3hy5~GjWg߾bCxm@^йFm^̹\ /&ܦ~.Yb8_}_eJ?UYR N׎ Y&KDڈDrIBBeD^/AwOj~垚B#45nk~;7cT?g$ >$ps ,<M0y^ ƣ^ 9ķ8R8{a~⁣tTD1(RzCMO-0}fssآ-N9;"v1DՉFD?݃Qܬe҃WQ wfй8|+)J6&T_Z#H}ОqIA;M2PS"6HK;`>=RҞ^xO#gll}ba-F}QF+@FBһGÄmT[x1yz,֗ Gb>Ŵo~6?YOui>(dxC9*Mq])HXB#1Ҿ\<ܡyM B >8SH!f$T1dI5FIL)D@dMaz뢓(G:f6g$U9{,R,CkFȬXƬt6~'d7rџ8rt>=}B`JA8DB> [?,R`)JCmXǦJ +m5ēcT#2d"$αM¥Ɗl:+*?LK?#M3<ԇ 7/;X&Ox[^\N?Z,E{ )*/V.DBSaIJ-lA犕"AXb2nKaÒsQrֹ\:1Pw1U])V) M~dlUaa3 Zh-p#nյUf-l1_lvq6}O }V;K K+b#2w.s6 6Tc/d%tgh! v1%1$t_u3J:Em#KcF I R1U3PkM8HV)A4b,՝P[I$U\LYCs1Y$6 Chͦs?F2s b3UcDT#"x+6TޫR=F(6 C%[`&<3#ٺSö6wRmc2/edf+I2OƁd<+HYl (-FG\| \Ƭrď0 #OX-h Y$K1`>V3Ng4I+WQ4dMCI>(b,8[K8ǂNώw&?Llpݓ(DgӘ]/H%J.$j#$SɨC-Ra,Z"i=j[;섺̇ΤT6]S ԃZ+Q)i6Q*()X- (7P%tRG,J@$]읗AF&byA}f$-+1iAqLd :1ֆl&ełtR\Kf^SM?k644aYKFQKSJp٩!tF_|QڟhDd\k.Ju5,&D}N9[1Ʋ 3*R[EMq (mKhc2i;䧇FdEjOyU;ܮfbHUk8$G:whw-C~k?J`X',x]Sr0'vr晟o4oͯ:A>%x6snro,jIdž@u%Bh:ł>٧)=[dV'@$V͟xߦO睿?D:})|2I{OЃH}zzzՏ:mR Ogѷxo/ۨWK;8ՖN.iT]|7oOο^]kE+?;;=Ubf2*̦̇V˽̏濜߰6wFoA fHQX檰e4?wb=ŏOˍ^\90=kq*qE:V«Q5':R6,>=:Ig I}''GIM㦖e7/O/ G?o;w?=7oORmχe.2iʴ~7_Ee>-36U$z4ϓ^b.n11W$kۍ۲f!O?e=[l24\ņ?. 
r渚 j!Իb‘a~пx)LsW?7Pۿ>&>F rI{cA$$%(P1 uR['i$W\Fzjg\v0d ;gqwBjLj#yFdvidŤPJ"O;&tv*쵙L|!aۼy!:7]uЪ1lyv>NΥy27])Av0u P\Ji^;#QJ!gKSDH,΃NgNSe 9'.Vdh 6 E[GQ2It ,LK+bY oϡ:LSP$Q$\[8i-{f6p1/d́IYB*]}g|%,WZY9OUU8'"I.Q1ϣx?!>\m4lÀբiOf{f5Qct-mzYФ`ޙ@SN.} YGUXr2rQI'hP9pHBbDNPV$!|2@Nnv-[Zg7z HrhǛf3|$2[,"nC]jk>~o\Xd].+✁lPsl%r\ gT)58"C.C=XЈU,X g.Y&սy&5< 5ʻ8]Mf.x&Xk} _r}ۗoqeW3S|6P'GWuhgiX?˖G5S3o3㳾!j3_=Ғ+jIYKe[u _ޅ%HJ=ޔzrٗRYڵ"-RJwXQin]3|Wk$G6v 2Y4#TJ{  (R3%SGw^d}?OwRݧw 8ÍAk/YH]H6POP3Cy0B976SJ)v!ZTaHj9V9m-Mk ଯ >:tkCLC۬.Lq=ϖ+Ü4jyҬ>76}ooݖ`Ri4Q㑾(i&t ]Dy0̤\ :}΂y-wHg se5JOh,|Nfpk"Dj $hZP<1 (Ԁ J\T0*(M6牥-Ck@Pt-)2E&o]5X3v`,~[-eP/ 0w<] ;'p L]γA;ёn8Ivg0 |4BOA yVL'68H7 f\BsNGÄDmpQ '&iy2]~ AzIHADMB_qb1)@LJTiin dN0ʮomw0CެϩEÞݗA2%Amnv ?@ȖR 7Ӊ^EˤH ٱT ,C(ENSj$T$x0~L^EرD[g=xsRL±$qj(x<Dd8]n'V+ߏVu=gӫdlN,_'^QJj6q~ǵ՟COhwh?w/v{βċg>p3>ϐ|g|:|u g2))!)˩#N I,ԪBI¨\RmFI'YlaߟONKS-\vqX`o&(/e ` Ff8"zyt]uHOkk"=#ZgQZ䅝Ǫf]E5Tx( cSKˤNЄI "YPtR*6([gt2thQoLQG~. ,%J) RЖ%Δ!a Np'x%6y]Jz]̡s[@ĝKiC6Q}"q_\ٚ;@I! *wD` $U626DGN%tѵހHN;kd4 62nu4[mMr\B)ZPR%'cm{oY>~Lfw\b٠S h,A(پfYC1p:{vUύZ>R& [!X31uKH$:R!i\ATH!vBgȨ"`Tj,JM!hOO$RN)Lii7$2^#|8@v@x)*k2q[.Fŭ4$K-'.](~I z|cȑ)mMoMc?n mƮ4fazz!J"ݛ#j ʔ§;+>V+N(?v(7妣7@LFBzR Iᤱ PZpdW7 .0ŕpur0[ՁRR)bd-puԃL P`Ho*I_*K+L*KXwWT(j n%,|n!^,媶?~\vӅ-V(G<_~σl4,~#* BGF?rEm4=FޠtʾtOG,!JsW@q% \ei :\W@T ?pť1ei%:\e)uW.W(pnRPZI:Ϯ#\E0zWY\FW(-0d))/pJJ7 J+HMYJ( hft ?*{cjGiEU`ϩ. "0Ľ#a*~:LJ1v:v #BS,.},"],MWjMĮJPZC;WYJf \GZ%53=Ʉ>HMD Yi?`RJc_O&S| 1 zdTݏ8VWw} KΞ&l8g{u0&_Wy`<>e<puSԋg`qU=&*q:aLFF&')JyyKKh&wU5㇫\ a)NL7/2Oǜ|^OU' '7RGd-eKvRG+uJRG+uJRG;}ܧh lXNQ1pﯖo+:ΪP`~~5TR_~|qT?׋kmZ6Lo vi[LZ+O_~`NrecxzVƆTQh rNZ7ZQ\j>?0uק Lsj%sy5{Sk.V~1Xm)wğYt}b%-;/8;lZܪ}-O_,-.ueT_,[u;#֣[#lvh_?ZjcIW1!asM P \Ja)iʁ1xF`=mc7KW{^>k UB_kb5`Ak TC")mڏKՅ,UBkK;4domPRW01oꃦY˴ 1\TygLDQZ#pyk \ڹna[օٻnWyJ ? F /޷ł,e5vW-(4s15[$NCH랍i{_~GdSP.LT5irZe![O![;[b(ֽO"Hs-{-EjV(*{C&\lVUf_T5.`w=L60毮Enކlҳd+EfV*J4|ղjؐ{RɒaU[5߶f33o{G᷋B&I=7:1,:TR=+z۝EАex6>8m|ϖy?p'I 2L;{ x~:H,\ulU u׻׾w7ӻO߿۸nyӡ'˵?r,/ݤGg0q֗mZrѣy˿3[ yק 3Ԭ^n,N { Q:Kl^pqTe7[z,;>Y~;ulS7۝\\urRI30N;ω$Ʉ`'J*O9e3g'*S\&D0!{׳nvٓ[6~Q 1x!WWZPhQef\gY&صcˬ9՘o]bcA?W^;f(U-"6O+x#YzhÛ+[~u;0OT Me޻BK9MkӇfR{ z :3l$ߵ#zk4CC};m#m1xbwN`7{yg#47bom !ENn9$}G<9@Xppl=xJQ3;<`ԡ[]F۴]x=RݶZiτ+n* .n|cN\-3q:pφ}{ w9Ip`0[Ym}Lȱ&CIjlIHΪ~?uZ7v%ºv[T)O~oYo +̇I߶?n(trVoRr%-JE\TCl@^ؚLvuRlIs4tؙ:dMNtVFj,RQցL)bXӏbES$>{sX4_n^k)L 7gs]l#UOg}/5hr)X+bWf0)"WM*bDLµ,=_:dBq`06c7c676iIbrx_{ hp"P G2^,4GYd 6Rt\i!Y*#ht<@d!h~AԒ4I" D&rZ!d^#2o)dCazƒŁ>S!^BdyUi,~ [:%s0Q`348\ ZAfEL@24ЬVVe0y m:R@ W.d砟6'{Ye/^]$zߞ<[v0{`oa7xK>o[p]߭8Zпk?ZǟoWVҬOt3Lq@&7u*G4KSgOSS)쯨JgS#+_̣~[0gx([ױUѶ@;Y`f[70|fWe%6j\XԱ;NCǟJW%BtwvzCѽcU}71S4 ޜG~)i9t9Ψo7s^`%8o85ԋ$_j_j_j_j_j_j_j_j_j_j_j_j:^)~1Z.g'fjYPK+Mqy"y Sks$?OQ(`qS-Aڼ` A /#)J[wqҙSawlmCjhs= d qu eo`ÑZTpRZ}tj֘ ]M"1ĶO|5ȿ:b@f@{[^N,]~sK(OI;f^}8p(5j~8xqZ-#9|q~2%Z Q#=W-Wn;M|ulO·(*wUXQ"ֆN!foJ80Ȕ͎ܲ0#DaFq\e#pW-X\XxcQoէΌ7if߄Ӧrr׮>t5Lp_OO6_3bUv ʨZu6Lұ0Ubr .iDjr6B7tQ6g6K|xov*y\)֗ćgˍ20b#1_ jcQ[Fm`\<I&UD^72ko@URoή?T|Mf0.{kqB,^T=EIzm3c'nY"rbB%A Xʢ$A}Cч͎Cݳ>Ӈ{Pa+Z汏y7Q?b W{tKpuc:)7ٺk YGk4HfjrRy4i{KO?\zv@!XO!rSQҔ e9^51f 3>+J)Rb>b zD֜-@C('/d˨B;XQj`Mq_Usڰ7\|K% 3@JNF]*;ȸ( ^TkvbǘBkҵ(0 j65Dۛ6`՜!ea &9"0Yb5fDBɠ}QJj7ݪs֨ȓH/<嬷l)g\g fDmןB ׂ3*0:eR/H SE Pb$L~-ODOxBO?ؘEZ+M/X%Xh 1Y՛f'O}Jx; ҂`)lA<eĔ-10eD +Q)DF1-IלV'{w쪼L-mi];j):\H#內 xM5r[*R`iI֟Gvq^ǻ`r6EM"R`\L%-h eDnEĵI,O*_v[C9ˀظt_'D%/}Kt_ݗD%/}Kt_ݗD%/}Kt_ݗD%/}Kt_ݗD%/}Kt_ݗD%/}Kt_ݗ{:P/1}>/4GCuB+"H`WN!˱űFqSؕJ&2H[KDD|[bamL+VDV3|!-eUUQ Ȩ!i'܍Miug@v ˕,:Ea%c*T Բ舤ޠATp_Dɼ&/dpwC*ѸvB}Senꓰ 6?dګ,k,dWE/jW3\v* 2ƉfZ$ `۔aɐQՐU6*a`TŐ?Zr6&*1 |L +H,I]]f"H35D4(u@Y9*!KB1Kt_GU -KOߗ_%!TQJp[a+b)zIX?q|[ 3>5^wp~$,*AC2j;)|6@5A Zlg\2qtƎ.(L4|8@,[t;%R`1Vb(mb֕ͅ[57~0q Ϲy&!<C[gm08 ihZlQ# 
cDpv2=J[=U.˿jHFzǰ|חfP;-Ǔge^̎<[764FaV9 dx޷_sd0Pg0V{R(ƺjR&7tUfXdy Bx'AE\'ˎ^9:n{9:kZ/Uk׻ZZxURFZL& Pm [fR9ۼx Ύa: Ϟ~}5|Š7YI(uv1J+]ؾ?_8?M>'gxbwq`@Y"g+sGh:H":fmfݷc~پ[64227WU_UC}˪STm|E]eWJk.:V8  +=m<ܖirE޺?jM|IG^E*8Εj IƘ1XQ6Cl<S;,[D F^z`Xw<9JT(:ZStz99<͉$}Nv=K#yYE$kƴ1I&N2qLd$'8I&N2qLd$'8I&N2qLd$'8I&N2qLd$'8I&N2qLd$'8I&N2qLd$'8{഼O~cD}t/S)y:0H8R="F({4Xgl?'H8=[wعW/%1[]ˈ^ eeA9)ƣ?/ ꎧy8<,=˧lXڠ4`5$[Kjջ񺟲9; XXV*5g}߶؈m*ovf-zw(X4P.W~ ;J]#7- d}a/b 8.Գ]/*Z^6!5ms׈Oh)ۍXG_Gm1잫[*sh&Яjwү0 2l0HjO)z"2 oO1îX~2ix^7ި`S|bwL""Ȋs@ܡeS2PFn9cI!*PXPhQ}H9wӢpgQ@&JS%g'Jih"yCL8cDV/[M$${_46hrzEIlbN:+/ۘ<]*aqD;8?e+-W@ٶI ưXyxOX3͎vZLX,1ز[?+m?<0B1 Ԯ$ڡm`(6I\MV$2 u$Z>A29\3ÚKYh0ɭ߽6~;wj¶R96)n،rXhvhzg{e6źפ+-1# h&~\~WS{5\ru oL24SpcN}2Xq/7]1K$fYt{ X{7`]s} }L}п GO3bsB%VzUlH^57E~]}Szt֜-9cHC(TAq2mSgs[goGCxWsJ/4ݡ5ˤ;̉*|c.0KYrf6q/} 0ibYTihˋiL(eD*L,EQ M9-@N@5lr_|x=*ޚii4? 3\␾%ULuPw"YTpKuϕɟ^k3cֵ0fL)u.* m!d." N 㭽\(F39g2>:UiMTg<y^&_yqR7uNKm`$3`XCL̓zH'8\;b$tCq Xs|Ro5;0wg U,Jf8 +d`mD-e$M }+WMp@cF}Qnn <^C/Y|L'm}*Tˊss# ô' ži Er⡐x5vC`.Xa=O jlVmQVC#x8 gn1!Qqb6[N+6zP'nN;Aff2O5Ʀp }"`cZ~p:6uhSz:ڪo1輼0m*r[vC/*߷xEL7kU-P¬XFb69;k#`]KX;-EA6mW_%['^Wˬ٩\Uuܮ*_8j;R˵Y|1' ~yrZ̈́"w7*o=W/fX~$ Kq\j8m^`fG\fې[~!`U/з\wbIw&-i.ƣtm׿c`H z--Zlo/O(3{8;[=7)GMZA0 oמ.'2?OtE{"mٻ6e*LEq8N$'G"ʄ(R~{GȑD;vTU'N+{i[z޾)A w#H{a]+דՌt^yRwSuY ?&1pMgr=fY#M_OKIO1%|Y::W2QQmc *&Ax=FC6VQC97kSE S+#Ar4bZ`Ie4jH: Q&Y4{̛kH0Fa[+l̾nO'8Jɇimxg"·1>@lepa/#_Z?(bhPlb2<(38 v(ɽF"rpNSw`fpnGfsaյ_%1*@lPB!!oCwRu8IҀ(qbR(;>zd!ȯyrGJH"COn`ʕG\6Ѧ?pN#LO04b噪䨂_|me`f1Tyy1pGӷR%H1/sPLe"ڙ_?y4}![Y&7!E'Y>))OƣFsχNY y!fm{V@ 7Β:<33|L]rI,8\2l8./nx+G'^~U͋?~ɋWoN0Q'/yy 8qlm_9Y/l>Z:dnNj*t&>_)l$ghۘ%vc~|[,@fM˟6ZꩩbL-|joW"#y)f^wQ89 YT 1D>YB)KW&#Ϡv[s0~2`PX:j#!bU((*Q*]#=.=ݣF y.MI Z!f2.OǠ5m^O@EO^vF[wghx'*`'V)Ts9JsqI1EN>4469,8Z]8E\R)LSqCaipK+iζ'Gɳ fAp6|rµuu~o鏅Wnr-TQn먨o?Ͼ?< U_-.JdJu/(U^ X,J^YMÃY*+)•C] Z*mҕoFpxW6h7t;pmQix`,R2ɀ1SX8Sm =dCz>%oV^m&ɱASg'wrg%b\[]5\YʚMpE6W=`k U cJ#68R ֪=>\Tknl&ΖA+i" wwW5x67WWX NYV]%%ñW'IY :cb6e(?r@p3qJ12x0@%Xn Tܔ+Q.(=&@C ̘vq>*r8*l%0zZ%su6=. IUp$B§ZzF@K2cĜE* (Sk%i =\5i]}X\,m(R5LTٚp&b۰Yw^}n~'-iڑu['yX{3ƶz ; "obgw+P}&-L+׻)8^rr īmmpP^CM6)# Z'{7h]Wƽ_w_toV3W B0#Ҙ2 I3 Km"Pҭ K=*d',D2""&ZH0RR;6v #ָtc>5ڣTgT&谶Pt}ciӑH"M*_@Rl%\4ba#GaMu4* *\&,̿t{Վ]O{1~w1M%]{ꮯ$7WBnk4{Z8S("N'e0Fǜx, v\֭sˑGPq&)z̶\J@/Xd|ΌatyN*WNsM({:ǫӔp+M&RnXʽ\l4vxe~j)X$OD b>ɝr@l(JhmOȺ%Hel3ᵭLm㮧k㶽abʇ72oHu(Kf^4MEh!H~DW|wmR.<^>P_]J, &$.}rt:k@xk[5MhvgfSS1-_R'LO]yO>-F]'sxVo[ke\QR%T#5S!0O<;hHI1G,*9P󁷞2-?FGvX}+U{*ҰZ !2Rf65J1c( z A8(9gud ႉh m!M$i 3ˌH!,ǙuAhXwΗaҗйfjҡxV1z`ZGlJbk cfTg!zr3 93 |MϨ ʈRBzQu`rLQ)WbG Tt=lP4T0rIKޓȈ&*HΑC H!`)h1u ,6m7jʳؓ N0"ڰd$rK M@ 0lYKx%7ӏݠi=xFRaA1ҁDe#1)MrЦ0Fc P3^OFE*AGi'c瑗*c X%`F PS9cQ/8:{&rZwwmxߎ79W*qH%5Nh43om#GEכ[`Y`wX|: ,$(G',_bٱdYVn>ETdIVca.g$C f.2#OYWe\'dU0Sf鵷yfC1sߔ34H!M@C`m;TԁsK(bQ"0Bʁzo߆P )Bj-Ȍ*kV%q^1Pd=(L& _ Ⴠ]}|T:O :GS* h3sqyH ,x"sfa1n N:OMc}ilIBcL&U+iŢO}<çDAKcd䎼⧩ DƖG"8DVm';ɪ1M-~c6vEnGY0?k);zߣ~S`gڤ^̃Q'|};j sx)eu&M} ]i7SgJ@Xcש߲ca959fBTazGxբR=0eGQ;<}~NG7og^.gaJ?0kQ>+:0+7+EA饚bzݏT΀',rW-=ӊ|]z=~bx*c,ذXכRH۞6LX^;9]}EN|o &?bXr°7soօ~8|v9<,͟^^J}^}rLy(,iK!殮;诹uS冑;M[JWP󶻛WU]6DzڥڳgwJNi~%BsݼՕNn!ƙnAKm)q/>zDžUWtq;6]suSݐZH{ɍ$OQ@z#si*QͩVꥤI:N*lݝ Z_D7tum)%[s_X5"Ց֟:84: sQ %E٢EuT3Q)@{uqeok-r>O&~)Oj3 ɧtr! w~2i~= go^|6&ۄwЖSt:k̳&lkڠgN2.Jt#z&Sјֵ֟v̾ޢoI.vq:R2tuCBA a! 
(Aپ!&Y:2dJD|M"KCL0X\ DY-*=t])ܪj 7n׻:Sϐ/r@\JEFkoDbJMN ZkzA"$x[݋zy7S.E/w&7X~;7ϔ>L-mu1bw1;yéHeJ[]tU){Ayη1|*"7qx6s_qizd[ґR=^ihj:ۼ S~.yToћN:: : $;QO2Q@=@m\?DqBYbDMB^% LD[(U=u'r05)A*CT>$dͣ,:J;bDKkP\S3skBCu[yݜ=|Rs~c,~)tJZ>癨msNUMg3Ro@utyȮu^!uE(G&XH=ҾliS^1izSh(f8XR:QJ:Y`]C!=++۱fmOͷ~|>n{ym/WSb>t Џܖ" ~omǼ{oiJ(g|W^whHwkו 8%XBUH3nv@*0&|pNG!KBJ`3(xeI,2D1vhݭb di6f<9=@31is`d9X}BA^%jU*ifK (>qʢ+ #ČAb9H'V8PY?Q̴+C9o'oΦ9Nǜvc{VQݥ)p zBQȾ-L"js.2Q-ՖgKJH8xp0ubɱT<rr`90*h A5263ndlUaa38 73Xx2{bFfo٧8gy\$'ta:=4=yz5Fli S8"gr& fj2J2)w. "NnTA~ 5 I E$6 LVd vqFoJd,ֵ^he݈Mgi# jCQ6=2]OZγ11!$S5BAD( SDH6-MPƆHNCgfYtTlC)hd.&@$"'ن"̹~sP1"GDp {YW(!sIF !d( HV;)868kTg8Clșt0JZ9{lU|մwHsͼP\ƸF\qq{_b&;GvԐ8K\7%-1j!bY'H@`,y5cPw1 Q,SwՏUG!#Js,kV+5/PC;\飁+6W<?tVQہ+竗 &e~"s~ &'e,֗^!/~~|J1 9_ tvP MdҰnRCK^65f'Z:.QRQhẍ́=LH hL軦>-jP aD!l'4!cN ֈ!˘,DAM f%rL&TBe W9,i$ *@[ojfmSCH47O_NJWz;S. q协fm|Eض;=W٧uzBPe|ڧԽp!z_|l"t )Jԥ IDeQ)EJH5}ّvC`$8{]˒P36ɪ0(Q3Ҿ4U A@Y[(AYؑsm"JX! J/] ebE;6E[g3 Ygv9*~X'j#-AH0 +M*hjKGNVRe-Co@N?-%B U81YGER Yۤ4gS1O$GՌjŋ'y:xݔVe9@| |4:QI$!L1Y持cGGT"1GjmB>M=̀mr#.DIBXUj9Ҝ%ԘdIVca.g=\dhGxdRRyJ`n4Q֖I$d["$9mã]xؠz7cx:0ڃOSUX/nkbU0պB޳u=M3j<{} P\(NDCL(%d"^84oTz(d@-RhQDU@'䝇VY&by|1f$ЖJ;3H)Bq\p&e偌BI^ۚM3z70zr{MkmHe`GE'9C*1HlkS3Ëġ(jh6@9ͮꯪ\!#4^Y"1C9 ܦAXA`H8:MhTJ)V x>(jJBضLGYHw "%1dNF0N+ FHD0 AP$@2qwa}130i*f9U֘GAXuL}fRiHȼ{A2`jG )H@3K(SmMN|SIT=lԲhRT*&t2FiAVQHz,-AF+`hDA wF oOr@>? חxfkwīKwŏaZ3dV:=,;5鵺qKd_zS7yڏZ  ! J)pa__$Fm;/󙤫H+0C0EJA)L&&SPGaYp 6 `MH?tHz`>p5[ߝ7Oalx;[F__RKwKHn__uY ŏJ*Ƙ Dzm H6ߞ8Φ6^>qO0H(Ks[UZf"0[1Z_b̃08蟞UkK|m#p4*)s`)CRu}%%WSh#CJfYހ0e+V1Yɜf?]79Zj36ms$$8_b6z*)k:U/π_H?<;__<{uO޼|;`p6/>leLQó?q us&]>Wg`Nh |f1K.f]כ/ˊ[|54kHS6!-uwӕw9?ܤK U"oi%_a_Ľl$I,rj.&V/`^KG>2Pu$ Ee *yű;H(=m(={^+LjR OǠ5Zal gJ0Q31L3 {'mNv>N:+`!lqv(T|FҀ+g+1\R(!'ט19M+\Z>(PԼJ*hF#p9&{/վ=;ćöVp`x)!I! ;rRQopC+6^B2 #(B;XNxMwlv6mt9!i/} %S^Qc> 8X:2*88p41(m Nw1!ARHRcpG+KcL"ₔJYgb U8у+MϺH`/SaK.tK"Rk`PlH%1QM0)G\Dq`'xsT WP`pIFIE½Zdldl{Y)~Vm-V;tzb܁KiTg%W6G%hWr~:]bBKOMiL )&AϱJu8z6t1mY-q 4VKӆmi^/>N?O,) o8L'KV^.0 `kX{Z dR͂BlG.;7:z$t$5hghpeK-Cyi(`2`J!]Znu v>#Y%=%戈!k~31)g0o^5;2&a`K\Sl;I\(w(UƌƁd>LLj9+UPJ[!}.Eb*WŐεo_B:ؗ>kyt°Bg=^zT߆Eκ{5w|8z?lWMc؎os 8AMۦ4- mb{wvϣC2\zPv6cZ*w@z;=IB:zwr ̆G~Gj " `6ȕHV*Ci=uZa$: .n_#?C¾]~Nkн [ʹv^)'hP^ Hcʌ'`FI5  R̨&k.URS(zU.NY+4-5eDDL ` 8]rHcp6<\[uޔ JLCǠoWcƦrڴ[F/)iJP}eOi};4 H$|&]&)Sa,FÚꘪrR[{\Pxytףc`NyI`4xC]Jfeb žIT1`"C!"=A)ދh42nȝb΂wDĒCgzO]K镾wg.Z\]Iw@$io<-kk&5|22>Lpţ5*c"O̦< mxL,N(jF0axQ<G9h/MD%oqPO8(N#wAS.Z?K=ӉIpݸAGtKqYs|R/+p$:/2ʚe  tD*HnDs5XΒޚv|wn5)j5O_ ~-NϾv|)<&Ǐ&Ƿ:\.o28 )ؒUJُG 9'~Gn| -{R ŀ~~o%V լMկƛb#kRyBTDX4}̊Wdm`J <:I >^NKPk^њ6*]&4-}2{e8:'*ߨ&|oH=3ؚʦ?iԼkY=٥!Rpx,'[Dr6ޡprGmv7ӆ.w5[XxHn/mؤPjY=˕}Ggo&uI{O*6QEB϶&7tO_At>u+Xt[Tqwnj8JÈEVqVap'4!2X)Lw$(NNL'{JЎ#: ~׈zJv # ;%tBDB5h9p\Y ׅH5xn^@O˽olU+Nk*Zpuf]tn͕qPJ2nH8xysz`r7 Uht4>WtjRje)n Yeg21S|۰UHE%I$,S֦dR)#јFlpG؛*JeOt7!:j[{oӽz!%]G⛽_"Dy~ xqj<9\;dp `1K)a43_gwZe~j,g`AsdT+C#/$ rH53y!.-S^)ړ*ĝI|֫Vh9m󥹚Y+6wo1?])-$sYj±Dc%$0TX-UfLqh5Z`(3*h2:ڀm b0XJBԀPELׅg}r<g+__q 2-6Eά+nGrP*̪ܲi2_砳ҙrp*RDe-KF.ARnC]dϓ3!d/LH{*p=ǔL* ByЧC¦vlTyaºХJ9b@wsƚ%{gqo]Q1#4'(xAcl '} Ae kxm*d]৆%kJÿbx*j~Rş<Z%aZ_7ۤ͡W~ OaSJSߤF/UHצe|,w>t0]ÆK'\_^}266J>? 
#mjR / g7>I >H  <[%ue޶Ɂ[-lY4)ټ{Y74wf:.lOsRQu~M͋jUjFdS$1/ͫjo}Qo-gE+ey2=䳍Gsg}E**-.P2Bϒ*"Vfw0vz00g8 n6 ) ĥ>`;W-C0!B*Ca nVT3D佖豉hj4H+-l<%n;3Jg/ۓ9l$L,\o-y}ubh7M$O-_=Nwڛ.ݚ˸$FiIrh߶=c$1uYcΘ.smo>ʴڷnGo~9G͍;7C:[]tJ0=/C0Ɓ|7[lU1IͿhF$u[GJg#)=dK!sO%AJ&D(Л<2uaw~x7%z2R`g1duar:ϒ;xt ":5-.tgMW7df>Q3WTo'n?wLǎ7w$Oh塎e7g8m9zqȩ;z=?z*K6yy/ܼes*(յ@NΑZYK0X`"k"QKQE57V"FMsFV;|=^$g'Aw85i9 sF"jO 7_`X|49U(]\:Cܬb|NO A ()#O*d1"CĜ!sЁ\RL1|]E =Ù h& >V VIV0#*)dFR˴Uhw19+P"͟ K'ЮspnP0vY4a#A2sVk/;^sXelHERswyp?\!cȆ7HK/C}H?>TU^/bi xuC_OO}M$,j4˘|2פo_uB`>^l;HH҃YZ fOϣQ2DK*Y0R*oAs[[죝v洗eȡMPiJVᑕ\Y#&XO- c.= DX00d޲?`(Rj55(_dlaK+/n}-1;):H4àQ(G.T'%Cq e#M,wj;qou:5oy0W ?ounf}7oGIF. &cѰ39;kH)EԒ)E 0;?el*׺J.rtݶm%=HwfkΤ:I[hZI[U)}G FSqž88 Qo0j`DަE#Zܢ@mW>.5'.uRxOTÙ<95*HUuGZdi@y*X/ R Mw%3jAk`%`8ZOKBhR|姅#@aחŻNbc]1piB\؜-SϴV ^DpϹndKh2/YϯsInӢ$JђdKk,)Q߭p29{8"=Vq?d0J:&<>*Dc8a]!:iU*BP1G Eέ2QaY;X+\$JE ~B v;Yۤ /Ǡ6TC_]Ny,I^;Hn9eZ3UΩ9} s3{ )ψ(! %s!JHuD IJ&:WH (| m)g:̣ |cR2gl)eDTYa`Q@]ՁpLKwd I; F6 pUG=)}>MfvljcQ:gsBZ0ɪ4,Ty)Ge։ -,0v $r!,LI(JfR"2Oll:l%=U!Ϣ' qhUݔG*4~M${:YgEHQXaVF"Jq0G\2mKi^F(" ,G0zSNq飶ěU`̙R18ۑ1 iƎXr`ᮡD][Nݤ7^ #6Y"(JU" Ԉ L X0J9d-`J0X( r Gc}9p^2a6qa*$`<D̦""ɌC7i<b5Iۭơ(,!z!KhTwGJ'7QzfL8cu sI&9bˌiQ^:P[q9\יMKvEi.nx3+{E6=xIy 8`='q 9O avx\<<̦!ό|7<<Vke< u7,8ԱA"~`VSLRJ|@ńbJ@+*~.p2O@Jd,ul H \%i9?uJRJՋ+r%"D-[1+˷ԿVuٕKUjCX3((dQM=oFɽLCC3.Ta?6룢ʚP L,-yJvtPۨny8(*uӬ7xڦ?xt+I/V<iaXxE邺^~np_UmGlg5RtVjSG{+=&X*=Y/W|DJŶFWX>A_*"/AlT_^^t;qm ,ȑm+\iҪ#UzZ-F`u1,d#5XMѺb§r7(naԘ[HVSܝk#Lu*w"?5ɔ|pI#Kp"- !š{Mégc3,*cQ0fl1P[hp>y'.xfPu=5ڼ"1E\ g s$roF]$o>PAWURRA$louQ'0p&4&i H֩*8xUmdhWu o_qz:M1L[=fg /1+YRݛ-0}T}Q: ,٠tWsAiav($JJ>wTa/Ʀ෹36y˄:0 }1臣3pLyhف%}s:Iaz;b=O7!kJ]6Si]9ӹʪFݑcͿCkn,ӝDcnΔ̛ 1;,'Qnuĸ3DQ/g$:G%=$%՝G =&$%WI`?U'WI.U"vFpl*+f8IUR^%\q(|B%9NKC3'ਚu)oa* |mYeTUOFiS*\Ɉdr8S&H"f\Ѡ 7>+xR_`X|t&Nw ]|dK!sOEDȒv") ȨB@?ԓI|g&Xc 1Qd$zMR;cD !κ>p(8$yC i^f2k!SÙ%`,tA\SBB!JɀAC$A`M>ݪ}ٲ6ڕЧ#Ƶ.dP~ P]>u01)nϋٷ*&R oX- GoN~Z~q+0x_PLy(Xx>n~_RYg,.>|am {l7'fnA  KU{i=Go*o&]Z=pU֥3B.rU3׍9FD:)vY4a#A2sVk/;^s8ʮYG,e}SסqRGeK8u ޥ{0|Y.bi euC_OO} KU zU)mOso`m1o'y YyARhŹҲVǺu>uI%fH̤gj_Qݧ8c𷻔sTOILj^߯1$%J(P$-~4込6<{-|LDC/B:Q`s};zDh֞(ǦBlو&UH0w<9'ḛrWΔ$vLԶ>*˒(Zˎ+(M¸#(tD `Bh]p{B/ws[{KofnuZ\My8k 8V2{&zϑJyZSt=_x35:؂TmA( B璌ϨQ0A+,dY׊XY*΄"a6HȞ4UJ5YcDl6̯:0.NqZg3-1.\ܤΕ%3L%ɘ(eq $ c1A$ʹc_<ԍP 3eMLo~ܺxg8 w 4~\{aTP9[?#=.\VҬ$Lƿ|9l`s9^@*bMH $}awξ=,) y_{vuAy`ĩe,sKY߃"g(׶r =NBPh%,cT*(A8Ei@KȉhRp?νF3Wנ(ٓDhH.MQjcH[K\rWht_cjc Hs ˽xetȊRuIrא3wș5gfR>8OR.Ocx$ǐ˛\<ЀlQ1nw$ u)o!NjRݽ ѫeХ$g )D:2H"XM  "G.M" L0ll%-ebJhsI,n%khZѵ`rƜQ:wZl_fLO,e7jn(.naU녤?# :9HAEcvt< r4BT?lH&4'}ѼoR]$c eT p]ut=:fXқsXgagV;Q阼Ɛ0Y1Ĕ٦KJX%e:GV4촷iqo`}=[?'=ace/ kkYw-3fiR6zRbG4jE)xj|GJĮ(܅# Š};ՋԆhtCթ;g~ TDʔ1RN* %-V9̂RՓ|ں@֙y8>tOJoR>FWaV6f6z%3哻kϦMoD8BPE/G# ȚTiʀ*҄)W!Zz#ʮC &,WyL>~n*nRWnY<0m͈j7ubxCN.3,YtW[$!vr!H@1OOn?|yL>ԫ̤HZSڰsva6$ AmuGU?b p\9ހatI$, ;s%0RRFO*H6V* V_cݕ  mԳHQk*l 7lėZx5y"B7[_MASD9[# 8Z:]ا$ T"Eχ}+Hd JgXs/eBh`Ǥ*_ sqGѵzM{} ;wbE?^HQMj׽_^]cL/ɗ"M?cZ' Bטh_u巋iU2Oe7o}t$uڭ鍏Qr>|Qz> }?^M5aGnhV=;w;W9O~bIՍ6榏냥 {*H;7Tcfw ~K Q֍̚ɕ֭'r׽ubݽV Zz# ufY§T-djqt[?,E2gX\]Jyu. =0?6}a]5Psz8+ F7dC<\UffYSwǛQ >lqo~w}57n'֚~'+{ ?Ӌxwמ^{%/RN~ߓ񧛥GoܷJ)~(ϟ b3 $4rWLdkD:q/5f֩1&yȬhz}pv|Kl)dgjwK)\|??:?iETpEh+EMT͈so '$4W}0t~cۻCY <.{DuJ%|[ٚ`*A4cÒ?I faƸ|cÈksN p3<'|V R+tjM|fL.EM$<zOHO& a0.3!3g(6CF"ڠ)PV*)8!S$;/Dgn6>sZ(EK_ |_UxNY~m^1\U~4*_ *[e,*cn3K9Lʴ݊$w5_T`)DS6nGq9RѢ6mL/G'%Kg-#4>1|OZ2* ̩D$qN_Zh/pQB%NMxLL(\G%Hc eI;(K164r*}>Mb2xUJTUk#. 0hQd"KcS'_|kҕr1ßp; h?7sPU7|DZqnJu$ @.'~0?hA2x}>|R@!2F d $dx:'=%wgjl:BbY:Г6)UJַ:VWUdE|=e‰UDTd؇F]0k*8'~+Dž}߽}?7y}|#0~+ᯬ_pa3>?^|vN|Ngr9ߓ\U2+aQ(+\*V z@?jܼiÛjۛ74ߤidg~mTWv*4vi(Dy*cjxIx^߾\k]&I#$Ap!9P?AENq* 3BH|&Q3^qΆ9ɜZrA 91ah tFC. 
cDަ䣉N#{: {mlN|!a|΍N`;Ԫ>lg'a;(civ,7C(SPr|JJ &JE V@yY;'39 S4m|@ǂ a eXΡL2$, IolHo8n bpRРBJ_cpWU}mM&J$dh釠fƃ!N&q&k! ZB55.@j. KJ$Ř"EBr2@qR/7 vFDǣNCGvݡtUiA2 >OݩA!u㼋$@ʼn$TJQ -Aem! 6#WiO%>l[ʩcAJδ4<&hFXPq*P#4Z"-OO4|41i ΆZ繳?_(61"ʽ:ت%LkwCi/2]kmK/6:!~~w:ںӡ dN9.=R Iᤱ<Ȝ;Lj PМrj*Fjcp9Gh&g}P6߂԰ w:jnv9FQTWı9.Y1Td *\܇BLiSՙ52{OճUxgӪΕ9< Bd 6*lVWk=֓>eoMQbKWϥ˺q8G{*]ҟYU%K,Yfծ[5_ j^*~>͊͟%ɜfqW_♯jw䦽^ϋnxwX^9tQ{#9jv,6ߑmVq|i$rjyvbGT%%<=Du*%-)g}o?;k&ӃF쉬aizZ r+C1RKaqp=-DbtDsu8/Q)2b@9@83lӁ6,č_Z^ێg/Q:tJ4WR{ѧSu1h˔ݳ}|f,vݕjy`*l6`h<7?eDD_ث~ګg"'oE¡G?K O'|2=OxZ:CbwB& CdEKB%g} „;M½ofv~}:dmDftVYuG| /l  V0c~ꏪoreu'?\ G/j8wIM@E_> i7_9r|qϧa`4+ :CW.]3tQ*CW|WZj+vp]vh ]mr9FoAWz\\iW9#|ExT]O'A}At|>\NOs(dREp%]AQlF+_E0͓>Sɒg>&?3=脖S_";_txS3ǒ gsgԒ]֔ml#) sz>1R3TjmW2JzUjT]lX2Խva\v03tlUgߪF螮z1 W;6javCW[4t%zzPF&;׮HW*5t(m;=]= ]Q K̄ ]e BWm+Dɗ/zzAt%A,x H~ Ȁ&%[ptiqahFtbKs0=3i;x2>ໟTq<< h1DL}Lbҷt$W&1Jn, )8m r5e)S; Kι$=j2Č},fHkaS=1Δ $Q ^](钫=VѮ2\c2h=]e]}9t%7|r@aR윮Ȯ}W[jǁ ۠ԋ* %-"+C_=.]dg rݙj+m;=]=]ߥtw 7!%]z[uI)bqc)4dg*6r~?{?c1Ͳw%W*w3~:~ENfw ? 0C}a[:m3П緺 ;㡋? (?oq @|ӻe>u=OnFķŮ [a%mOh-{*0y~|.sg'7/OE~ԝ*L§aZ cx{z9@lݲՂ%0Di%&l;PZyEm&Ji]{"9Hc,ȸo5Ep DhA)JbJ--xx85=OikytW^,o5Kzߙ=(}VU7: o\':uL׿, =x O)Vc}c]mStw᧫ *ttH_0brφ%ft7&H@5(n0w0O v<=q/}YCsVl%/qzKXt01?F YR-kш4V?85#M(w9"5L#ÏtJ#^2pN 8Pndj!@3xfxQh܈mЈmĈFFAEy(*5E)$DC 2tW/YkhFo~@+)wC[ݞn/\Cb~8H~y3ߓ|jy1䡥]\t2ޜ-ӼIwIw&-gPτ8Λ_țkPܛ+[\a VWϟ D:V\X9iY@y;cvqL'=3wYk!;(`?#DFvH$-DbtDnKw˪eՌzW/^Ŕ-qhv'jg6)ƛД)o;1K1 ! `ZmĠxb҇UL$@GpOq J1*!1*EFL('xF`=-܈߈0AXU]3;qm5Mfk;^IǥX# ZԦ.&nڙ=랥vλ^ƻr%5ŽkLf }|HT$ ㉺Nuj6W%´qQ;=;ts}bgr'a)N %M{Bz%zҬHR%5 J5AIT6– 1a=CH)T&q'전ă|)*Km18-2tMa֧,v̺L[ ϿFNuI I<̦μtцz|*Ҥz N\ښ*/7tF*%@s |+l܊Fՠ/$("(U3EYy0ьeu bVmqcȤNڃ Bq\r^(XY #J!$;J'*$h1蔸U1HnTif,g?3*ŸXdȅ\\!+-ϟ|~,AeS5cSe %12&R H` Qz֞8LK"p%Nuh8>ll9GHL*/<(jJ6 qF&\.=Q ~vAXaRq*kCaֆ;sMcP l(LW(J5a)51^yؠ$pT;- 2.3ĐhK#PD MuTn/ebl #SfD1bLj"J@)G{ϓ$1I`#)twUYOo4@og`T3eqyͣ4YSyJψOw!/N/v7iXʋ0/;^$oŎǞ] @>_^lCrux^l SPCq ~ӊij]ՏZȀޢs2`Zs.BE[Eh%k"`[k#P_gvϴs ~kh㶮YA4gѾ|WOà&ystbӑ߶E :εN۟iN}o^3jd1Vto"IciF+LZ7tGײz%8҂JFBy0TJ )™M.ou(Ǜ: lrBx%h HYTJُ|XK?Z0>iR/f\D*Ƙ526*oveS~w;lr3oeYNVk)juP?)5\mPLkO  XHE)1AX'-T:V/&淀M3\KrWcx8bE>ab=u}um|5 w_αlY`42Cm$% ϫ-gΖ2O6-jCGl;u!$]9ZV㯟]GJ+-@ߢ[+{4|M^% rI6j!i uF N 픦s$zBt(06.)0rxذ-ԩK*mػuI/NٸsO_ۭi11I > L[' سSABx4޹zKS>lyw5෽fMŮٛ 77Fh I >VBH.9W_b#CS޹*H+HZS+܊|ʳXP23i(rtݰaq֍Se[NR]FR RI*dlMo46lKW?w'˽՚UW9hEtM'^!V7;@IJFm-sf\BbfF("htX=d% 5isZ&Nq^ٻ6$W,2RTwW ^ q!~xH(.߯z8HCQHq[4kz>}q ?]\iHh&Cg>R5O&~F82E*"KA:BBUTX:(E.a؃ۻkXӍvR|fo/i ǿ[6DoZ&Vk13%(H 8rDX} h^oJ_h7Ϝ /^X_t?>],l b\L ^JlJIegLV01I3cQ,B+{3 !2Y oIQēpLێ~iYc>YGӒEWKgaJH>ޭI!Zm;hf2MvW Gtj&xDkxU |6xO;,bI(ULGSNzȅ9I? `L..Oc>ZMOoy4˓H]~*NgٰHc62h 2Bq+nd3$ӌ ԛ]p 1vXfsσ!@ `1B#̚b_s+P-|&7l _Lﱖ0*ŭp'gbz]\|*)\~/<N%ԍhBxo[V0<Rn(l6Ό~]Fwlݢtk;z!\z"ͅA٫4g:,?I5v/ef_]NJqu]~QSvWM_hKyS5wo*otC6uڧ3t)Yy G%?7hqz0oѷiP]jzb+|yM|/sg\/}40ZJ ={ Y%'-i?Q0Бѵ|;k6]| C3Tu+vȬ{5.F]&%V3tvJ1NZzhָln|c]y h#tbO2,~s}^aSW簵nLr5Me >md׍jc/fXIwcⷾLj$7Jo$+ ϑ/9KZæYІ27G&4n4l<ǻӹ_ ?4o}`,5_M [uxqA픷Xt&&8k5\b#iDm3aM޲m<})["݇w /Wۚ:ܼ\W$6猧<3Jשef5R!!R)BHTKȘujtRC嗗Q~D9^_y&7ŷ~9eB@͐P4VaQ 1zEC>鲎EdcvxSq۪%Tb.Зx {A Ú]bn-{stRr7G +e N,>q`FWDMU}`lexl䑲]ˆ}eY⥽RW2TK|f'uʠX)$!"86s:0f6v 3'K"Dh324+oטL'1+) H89 <8]2:;cn= 2w2y˫0T~m.Q^*-M](eL }v*2s+z\Ol B[밽 =`FKsP9/ear9`|;j}ݚu`D{=;ZI#ٛCm(MGHlʖjFTtU }be'#'ANx \PˤAd$ Z+YٱuΖzVտ5r XVoiQ3a${f Z \ YK5$FB$WB {R>|)-3˿u(3R{rl}:ɱdYR&JEa "juz?i4 3ڠUx Z $!Hwh᝴5Yo8:1{ЙL씉~x&‚ރ xmD)G<.HWxH%l3(R)k[w"U%t_37B=@+`9F幀 R,OkzyB2ܖ^=SӾhGgcc&15Aec nt؜MTšϗ[.m͗Dɒ'$qϔEͳEgW(]FWdY\ʻ,TO@錿M@1{b| rML,U@D;n! 
gü͚tTZG~GLP03,J0p =y:Rc|4hYQiWldaM&ĺ{x NN[ EwEν%,9m΀q=\NJ&KSxl&" ѺxA)ڊi%Yύ]|s.6K GCd`P&`ѐF)BR 0>6Wk"=I4K)BۈK,D)G gdE'5iQ&#xы8T[6ċDܦH[F$"D-KJ O1j l &cWniTe& ,Ύ9^g $S̜Q2c~B…%4]x_޷GەVy{!-~5@z1]OFZtX9ۺ=iqnITVpgx@Tx_q< `ϴ q9;81ZJ.ÁNә}:^<;H#[5&|$U<:R]Y{e3ֳK.؍h݆]}w|HP_E>^OWj9Zut}48wGԏ/z:::<;?\ҹ8 l5\_rk@iZF+'NiHaT]`垚уZ_/+?^]]6^fkzVf1t^Rpd0*F||o4 06Ӭڙ@m>6, T*'}\4u]G =sx6tsZgll^ #+uZU8I|\BrA1&꿏yj̈ dz˃0O8PsMFCÛT }φJE]iTUU*\5(@=tf1ELkÃm@2Ĵeg}sRj.=(;[i%QE 0PTȄ,V ʡ|2+u(\"灁'lb,iFjfK8/d* Σ8*;w#iɦб4j?fӛ{sTحh}DY+Q+<%vݹ9БGCGML'4Gn>fmG*TM͡:I(ID`*qJ"?_xڑIأuל!s3 1|^ՈV:nϒwyeڏ=-mxvÁTBNTJ&,O$D:3WeDUPhdVR4뒲o>_O_ˣX[yDM4xƃKW̏'Tk ߦEm}fhϞ?Ǔvx[@D[Qnu /W@cm ;娵C=i; \>(}Z,׫t\"QYeCgzR[%^PG-거RZ=}oXoz4/\bUaW˗b \V\} /\rVE.b *\"?{Fre a1UC@E&H~ z#KI I-R69H">UuVwWj*3F&$WJ&#WUGn媣wUxЇ#%DF=\m8g/ڿJx7%Wa zSv\0=.UGK{UG W{(WZ8;!`Xrd䪣}Pzw}+vڸbU#o_5|u웍 7mt]k)9s ;o^.ϯ)gT8yV*01}w10].^}vJ:b'0.<AW,(YrίO֦%E1zc{?*QZXjtm^ή~8u7Z]g4d8V{aQhjnIsC,mU݋Ԫ4]/+ ̜y~}yqڞXg􁱿,cmMPPNits6̃gu4GyyR}Oa':w >殖>XeE&Tu'\Ǔ'hJ(҇Jn+9iJ:`&#WnS+.W C+ ɕXpr:LF:Zw]:ʍr?rem0\u“NF:Zov]: Cr2ф EUy*r_Q::ʕw)]yd pES@kxAd>(W䮂7 Փ93њ?3Q5_\zг:Rjj;F?ՖpUDgjKA\z\-=c|I֓+S+w]5jJ+ gme{>@ 5>= }[lf>q}yRnu'}Ӎ|oz/z@BpM_o~]Vjv׿bf7GK؈^,El?[6V8 Ev%y[7{w;;ꋲypf6LȡCpOšvCJKPѡuvpn{{x4!MiBdg+f*j'PCq+&#W.OfBrQ:9ʕUjs͘ݗغUT hy(pl)jeBr{?\;:Zq.W=L\ye) 5OO;UGAP bt1e3wT䪣erQڃ\}=rEz:+6-?vp.%LsW[crEz?!wm~g=IuJrr)+DSͮUGi W(W2N]d:cc>r< ^E೩lvSy̖ ۘKG3ծ׷ۅ}õ|vl7(O\ чTM幀[$wfBfWr4yrO@iwJTr"xBrNUGKrQjw=+!e}o|rd䪣;/WU"_KP]`K66/g89=]aUɺ>]fV/^ {}Z߂D^ [uf-^Kݻο_~զ mo@/>+xi؈|n"bzKe&X˂.hV,t8V{U_V̭n`> .>}*X;{HZ lٚY TѼJ0>Uk . $Q1?ޏ薁8M@|}yy:3=yx;CEC75 i]7-xU<_#5^Zq 5jh*YoJ*:MQR2f}$)G޻~%^A !+B\e-oukk\Pr~rVnRBpS)D<LI(*1ܯ1TLE7Q%SLLUBr.Q(c-.Tt.F7 qsؘ{c&|g?KaW* yP3}UDcS\"Z2rg;&ZWk ^)Z0J&\I%F-`bd]v-֛ oK3X!.&Kƺ&H9.Idत:RcU K}P 1MkK`3劁FT5ZEvm_+_[ ,'@ו z}qsyw>fY Zڲ% ,OPɐXlo5Ĺ.VkusTM.1ڜATE*ɑ F[5jBZ<}M94n "p'RAr1į#ZkDUBPZvAlP5ja`HȜ'cRGڦ`JP"Ȓ5/f񬊋2jVb[jWB&ȍ@1K΂E2VFYR4RFvT`{R]J0I/>q*!^R ]BnQ0xJ݊ *9؆Z恩ÎË栌N#Z.\l]y bPn X"jX²qkP f1űnJʞeS}`T΀^`8$q9k Ki)~bE dҐK2 8sBwcm'zMQL%b*Ċ3ی2"A5cK0&X{Bũx(R3 NS㯜2XQcIYuD2K͸P`BDaPwSAW8c)(`Ogј` dP3%9ihLq̒ʔM'ԭ)iQ ݄Q>TKsv`2Wv9PIpgžuU +ؕ75d&Yl{R AEg$JQ"A.Hq`QՀ GYVRl eJ@P:mdQ]لT%ƀh $C1Ts_2ASPNWDrh!ʃQ@PF 6 m8`=)a!ی꼵 FuN=4ȗ}rnv݋ TE>c8 9x^1FHTP!PA/HLwlݙxcO<^{O\flɈzD4 ަB$a5ŒW . r,.}t,}V2z89Xmi>Z. s[$;CţE ~PqMVz$2Lȼ:#P>8>i. e1W|OE# }IGXVOwkrx$ ې, x.ܨNv#kUQgkWw*km+ɒ3.A0}0í~y~z׫>;<|[=N> _{`#F`F$S6q9(P"Q)E݅ZB [2MX Xޘ n4js-h'R;y}`z/B>Kv%n*G#ZM4\K,yNH,ɂϱ7 F:LL*>QlV ;MwE6DME1T Aұ^4O=`a:`d8v{2k&``n8a<HjiTYlϢY.5 c`vDM<@#H<v.xדܸh 4 D ᝈFi*(j0ݦzDGy| Ypa܈Z(j c-ꑃeoi>WJOl4-$UٔeeWn8klu4ƛR o\;TmȂS\S)%?/ˣ!@Z)vF@R0BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $B@`-5& R{uF2 ]R$# 1H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! 
|I .9&@  Js$H%@ϐR "BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $Bْ@(C@E`y<{q>{Ǔ@EJnz$I $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $ t=^ݽ z˨nvr}P\wH)ΖyU/FG@ LKf \=pADc\i:xp T.= p^<*$?sռ6<׊)%Es͕x#2WE`u<檈EG"9xsRjB\=Cse☼"d c1WEZz`rkEsu ^[*\O\~'*s?)yW抡z裧BH%\1WE\y4Hš"hbB[m\1a(Gc* sH)-h YJyepNx%u`[y&M 6LTD%^?/o_i9Fִ<䊰>}O [~u[I*.as̍lto2.I?#P ˤWx M_]/0{Ԡ4¬L/i=/׮͓S+girBU(gP yY> &fʬU ԪW'y@7|?^Yjr)غZdh1;l݊fl$p>HWONe&oy< JJn$K%($HNN{l̹R ۖR <61js_ꭧ ɭ3Ca8mR $)pgINB]Ikh}D`DA ֟Ʊу7e<lWGСݥVo89í@7,Ű=K}ZJ-{Rd䟻.vU~=|7?$fꓳZjCu?|tWm]7)ZG4]{e76쭰6Sdbc&r" Ăه0heAYU/)MuOfjX2si+U/IRz s񲙔GY`'گfKsTsFRImt1w;MקP.)v /ܴ7K|Yj6!VOnXpA 9|FV,avAhrYt yʃ-;WQMLd"|Pr;0axX9+LB %StCqƻuN(JlL)X)"E6ܣwuTzյء]8K{9E]Gvа{ّԅ/ 6g!ͿG&0oX`V=qsY6_h0;ݜs p_z'c?~voFᷟ=zn]|iìݤe13b>Y fM^obo?/ڲ?iԤ?,fO8f(Nd=MA$kO7dR*:-2+̍pÈ9E=RMlBjP۶կ[HvvtiɦvٿXۋ-ljYr3AQ榊(KS9_T0t~e**l mZ)涋ue+0%OEMBx8[xYҠJb<_{+7kseq5X%4w/׏?#!DxXKOn3W V/iċYFuY|yv5%u_Aj477 kԾ%wk[nScp[Z+Sβӱ7ێIzsDBE$I~w䏓wC=ιTg*g5VsB793ʒfowLo13p>m({DmhޡhvGW|+] tpZg)8rVwuFA=q&ftQBDо=aŽN.[\yJN{V3g>,봻S SZ)Zmf|gBE,:p9 ="+b\0X6YL'  __QG:HxgI@me'T ` V#k07me y6dd0Tf܂cѪ ӗ4l0mWF`Y7K~;C.}w66,}n^|oY@i)/Tm'E 7|3n֓xĆ^܃Y=:1&N K>:1EZqUagX'~yrbX/Gc:ZZdkc\XqwQW=ZɃaL$])1ǃbfUBsgpLQQ(D(;dRXm.24TBeL i  F=q IBיͿud9w&!e5#(JxnXGPȉ=S,e&im W!, ` X@,X:gO=k_^92P =ti0HF N$DhP"ڔ0 $сw %yqPe"\]~|&ºGw{L:f H7,*rM 9jJ̢qYJJp$GmnEd[LS]RAE:"|{GbrH8MD)5X Ǣ(GgztkghGg#Irt$Gs\]ܰC;&ud |u|Ƒ(Yr̍{8m ѝ'EzW?vb\6D_'*%WgNX03>o> (;U҆r3#R, B j ( 2QB9a'r^+)'n6K~xy2mUsElIثJq+Nח6MZq+pG7}2.RNg&gwIL>-|2SSt?3*$h+DiD¬H֡ ?/ˆϺ^[t-ڧkrݯ%n5~_ɀfoz*񐎸L/Ax,Ir+?ZG_đ`q$6 0Z 0ϖZ6s.UR-9i ]C8so,‰ʚb;Bj @%z?3z2_ 2E)i9f2񕄽y!:&]u6!levC`f Q JzݕS#$+2FDt"fBE"*dj*H NQ@{!s JAL2I{& +Qy \-.x9phNA_} z5 4 Q+㋕hf9OT&N3LEIV;ۀP&TmRC[l;ƠID¡%r('x h9ʃs& IX A`t&deNa,ي:vc'nii _oZZ7s˘ 9pR:S" *]U^ ]4J+J~k_&t5߷n7^mEkOӞ6νb+ t$[gişfA3Z3BLWhҌt*,tp)1ZK9|E&j̦ GP_8 'oF>%7~j%̘_]}M8 ֯Q2&qlU0$2ewNՕjRIƒ*R@"c]2wV Ѿ~ZHH)(k~^zM|._穖tVotqGg4:ygBi֛o"Oui/~V~mtaF}[}?Xͧ!$d "m)}.Y!z1G/*:m\DbڶJ*2K k"LR)`PHIzU) BOp!u|\Oaq~s{q>ϧ=n޴B^b{nwŴⴻÌ Ɩ4t =jm()E&!SYhKmAb<>@ #SpH kW(:xS$Zo6;x# x|W G-Z=SLb[x`{!CC#un1ݤ]36QTՄҝE[By:Ӧ'G!($[NHDO֒$W>0 9h=HM@b/dc]||L"JYUKU k,h7'9|V(uۉc+K'orlswoWmcu3ꮂ ^@??iSQT=u uUwyWxO<pr 5q-AeSs{LjC5FMa% DFtE(GqtуR7{[@ emǘM!xxk[]| =hwu;N۬Hx~hz~ nfomLpQsAnڤ%&T ꕛaT*뿯Owtny9G.Q2n_idOvsiݿmYn|G+Onoov.0}d~<*]k=}uGs[w+r~)/Jf7yN.HsfaZZan7!nVQ㏣|Y#T~u߀!"&h vԺ^jdf[( 8]fLؑAciU !TBh" (DP릒pFB}2€nS%xsu9U\$1[Wϯjw'+퍚1ĀX$gy;\ d V$V tUETDC>DIu"Ib 36͆{/Q<{ީU]:_ȻcS2=>+`=tn]L^>^,So qWtJ̣,欼P j$v X0VvBŮaq) +vr$iNƳ >6 aNZ ✓Fݘ}AFtMPJdE:֌d(0E>tB A1ZX!Kk`l8-knMCO+utRsL 뇠u1@ G] yz>Kr1.Hls$rw5ܼS&.xR\l 1یhDsr'Tdi\)"QeMQy%THU` $` Yxi$"[G4Sc/?[_~|JQؔp&e-zE˛%DԆKQBe(V>_] Qx(>bP0A"yXZ:L-.Nj87* gXQX)n&'^<)EU*Wnw{XKC  $X)ֹb@$hEb6gK%FQraĪRkssJQR6`d*VR5cQVi IƁpЅ]x,]vCE=ه{O>MN8_$%ɺ֮V٭xW(hVqֆZ= ح1ZHDe C5D][#j![ Z3OT@ERt9!(fI f K#PC,juf٭~WhfqF8hm9+X~IH!J0@H Um! 
Ƥ_RL19E \J̤QY,Tj85񫎬'ݧdl&%EX/A/zq{_R;Ovu, Rvb֨!DH"2AF/x x*tuc}ӇGPa ߼ 7Wb[BяcE?.f<ڹu|^ :Tuu49(ːh X?O/h+=KR-}ED[ 2S+\Q\z2d:|g[Bw1JeBLj: /:D+ %RfrL>[fM%d 6@;P0㯧h8 |5EpuҜz:J Coh6lnqY!2.(]uxJGB/#aq49{S[1a9y} 勊P~w%|NxQQ3[d}P j SwLp |lg1T:>w WȂcSke'gX#d"mKvu,D$l=I e*%- f\+OG{SUﳺ>2a^v4E7\:˗ut|+Pb_[wܺںj{N,Q''qlrˎ[( vFs-ْfw ztèJ¿C+:rlgSZxX4g?k*]Rwc[ YwR{n&Ԏ6 Hl/MI8͝Խ?{"uYm䢲e z[#pZPʃjw'Ǽ~3- @X=C9t<뺄X3XyKb -y!AQ\L)hZLY-M2[Q㴞nyC+xy"ɍ8.{3rgzg=wݑjk݃KaWCᡫU * )$I,T1$zYdIbLoοdn񣿸@8*j)GZJF眝rt>8~XBe"KU'C񮩢f+m5xk"I"+ShDԢI9XQ g7F1Rw8)B?<(rr+D<Ur7^>tdAǞSrr:EijWC *9/%(k)t.}$maC,V 2H@X"n a,RQ `* dBBRQdAJjm gelUf-ZMG[-t"oW7$6t_5drdz-r^D[ؿR"gF#5VjFl #]0d δ=@]:{!)cTl6ud!ZHN v}a8ؖGbdz4bn%ZmjjviBrNl L" C551X X5O"$YE4n32^4iCQ) *+$5/Iة=l6wg0El&ZD""q+>TޫR=FH%YAAɐ`$<##PF;OQLx`8Nڙ"!d { NH"6{f~ՑtjO{ͤdShIdC;c$I Slb2 xVAIeAQZ6fұ=4n5~?c'E,n~Q1VmE?͌Vrz bd$\uΎ(Leb5d-ΖpWD+\V;ۥy)oܔEy 72YZPd"yzp/oQn!cl䐅䕵:2$7!::;dSr/l&[=Cdхّ ckɄ"ȭv`rp<\-uҏϐ 9ɨ}<75|тEtZA!Vzf3tDXqO.In9sGܕnнq҇(t1ܑcVKҺR}!6[l;;AwQY)㿲7#W-|$ݍH 1b$ %y!$3rL:L|_DV62p4VkRmٞQMC*U96wJ?XR>w4$siM« $dAYAl9Ya\РH6Q=ݽi)Mt6)o[vovx#{c$T_3e Cg0]hT3:8upap*AA4TR`Xbƀ*[]F--׶qw<#u 7< 4WyqEno~ȳT-V^H}Q㈎v9ixM  ,DҐҡ hMzg4.2uY2T&(SSTtG yC8iق33iC)g-  x5cIHuil$LƓ6'% /bפ[rH׫k Yw0mo~z;zZ(͡٧n.Bt ܿ#[DQ ^3PWS:$4yJ81"ZC]8 7ӟl2vу[ 1᥆iYC*AgZZ뎇I*%cr^F[Fߝ39]`C-|TEybYa`V:"m,F8^0 muSR@GPen"hFxؗQ)FR)hDqT.ˬ`=a2q)QvS' gsJ?Mժ:9{S[UmX`D|qbxo Ĝ<+,YMOWTC*Blvrσ hx˷i Mhy)4B23̨JFCkc[҉?e4$ЏAm,O&U9bM=qc kJBI(_w Ɛ+Y +/E94"$J6:ZLo6G_] ֞I\=@nX /0UF˧_f\+be$'Af+ǘD)IAM+` DFdQg;JѶFz/}4" r Z,>~ _3?0o4ڷߜL!tq.^؛wqyGk߾׋s48?g`vPG(JkYlhX{jx/khE=8m8M%lџ>Wvaz(|C#+Vb]]zq?0Q}KW< Y(fy"ruK}hv?l}}=fB^WmR.OjbGb!;osxYà^}s\CNcÛYzׯw,D64]e}0߆v2JqN. a҃ŗ._6k| j[8O9 p:CUCa[q׽"Pх?itQeBɒcU>mC0xZ$m*BiP>6 |}%3eHvCܼp%#ܾvB޹tElIgp~qiP}ۡᖇ4 Z0AC⽢ug&Cuٙ^ ^n$fn}_,;m4:.ɦ$\-W-jy^⬕PQD4eubR:Ba:JŎM6{^tN#lK &ꤕ/:ڢg+d0{މTS. uC*Tt|o;Efa^ۖۈ!.{[u5v8J5[~ꘕd`nRdVTu4uWey/َ ꠠRLڸ%կ\!-?w-n4d 7Рm.`DMB@_d@KB,g?f^|ВQ:-(4͆VG-y}9)<{u'2N^.uv=E%7j["Mc #Rm$yS,b Pp/Bɓ]09, .HҲG}1c^4;j[![zR !_z7Vo][!EK(ҵ(%n'3䅜MϝMRy0Osq(9%(a1ʿ*\ ;w ~n+4|_#rF8?z'j%RβF%TL$ )+cs.Rr^!g}tcf9x5uevt|QۈE[J=|wN:/[ χ,7[3­#ˆYh_GG%TM\:2k`6QRfr=+{ tN~S\NBoаaWcwExh3E9S7n-ZEwW/mL% g-#48 3S S|TDp8'%p6`A1zc2B'EIy TWN< i㹳jǘ2D*B2!׭DEΆ Ҵ1'B[CsYwʹhqy]DKB^bW6hcQ>2p d<M: yCYƆp+Ɨ:>(و>7ΖILOT)JSU޹[pV 9"Zd(|8l= ,b ̺K@SN Z%\!0@)!b($e,u4;?v?ִ #A^XjS-#gI${t\#cIE\4em(H厤|N'%VS'DI%( w&S$zbX>b\9/6?e ]¬g#9w?@wո֔|1ÞHz\u~]g"vf[9 M@]~0wPEaF6QFE ۏ +rx$c'(/cwD|qU!bsFb?-xgyV|ڜ},݋xӹ)Kv<5BGGUwQ$1 BM~RJŕC!qټkk'q*3JrҧiNU<DKm20&si?;;Ɩ.G:Iy .%.ֶ-]koF,/w1P K4F]|8zhwٛ9׶J:u}UqE3)Ӈps90u!tUޮT{*&~* ߻:'GoN^}~?oޝէ'=9}[q#0>پ_bsYX?};{cә̎˓|`gNh:a4Ige;~ZźYZY/]fVߝyӆ47kiIӪ^OvU.io!TAyΓz[ܫGHݠr!9PAENq* 3BHEUMf*R[鹍 3น@h>)&f'쀠 ), ACEx&g:4K5xosu:Z/×2Ԡ({V]&G$XeD[ 0:PD /ȳHG4Gh#ۤud[#@4*Y &Hp vrkjxjBF܉%`Qzϴ2 mHn4EAQ-5F:im ӛwښ[62HQ|֟>4k.A=6^%uW90H2T;ʄ((8yӏ88i|n ڽ& mL}II3n3?Cݬ$@w2R&-CMPY EpwSo<ԛrX39(&aHj9V-&!9h4t ߈].D`Xl3ywCa'U%&K=>RaT) 0CIM;PZyEm&Jzey"9Hc,ȸo5Ep %NтR /EJԝ-6r;qǣp)w{__fMiktRGh5˗, De)*`t-ۢ&?R& [VgPTX-!*/yB> IB]GBF"* *ƨJExdQ CD)8e -5:Ukx@ȇʚLk\Fŭ4$K5yNrE΍y\K{c/Æ5 3{n{q`,a*) i U=J 6foү"_>j=jR6~m8(Jzqq(|iq8r5zWd{bvA I\eW\E\! 
/2W_R3BS.`0!wVm/ӥ\L3uNRgE&!H#oꌆhDm?NRzWs F+z~+m#1M>{#\j侈ib:S4%# KzW7i ᧹c3.VIq*eI4po8̪gٳ*%4@8U?/__Xeys3R.B*NsSF{UM䨱,B̫wu35OW\o߿>>0A FZpFBaךZ V4zin'I&y!3˜$R0\!cSNH.AE8a?κ]Z8ΓgBd풭cn1o+RZX("TpOL3(uS~2M3h` 4L&Si5rٚj4_FõO Leͷg*hW(6=W`?^\AE\ejnTjҊP\T.ة?,e4tʎyY˒Et~\#PŢ/.pu_OIjc ͈/rv=Jw(]zY®J/66eq]x7YZ ࠳RA>O"Xi%pI ^Qωq"X nvoՅW" w7px5J򮋴[qٸ:ȟ=WfHw3Kp<Eu*8@ˤH Y\$ 4$ׂXNwYmuJ{>`5o|mQ>&睓g~HK[aJJV喓]̪?]m>wu M˙iu#~ isGm6wT;jsGQm6wT;Kgf)ժm6wT;jsGQ ū?u_wO>ޙXp`nO:ړ'(lg2r#EC$)4ԝ1"H"b!B 4kjfk*Fjcph#P|Q X `Y*Ge06r6P־8]ܴ\8_ϊnzbbs*\}ԭgMt(4V7!P`B`ބ(IÈԂ$MQB*)o#*%#d8O׋Bq_k06&B.TJZp)c "gu}kl|c#c66Ɲo5bOdl܊Yk 9Fj=qpDK"R@4)FglL4JDٺuݪޭWV2J^c(0l1ԯ_Y&EVȼ) H@vbzcB<2ZĠxb҇UL$@y긦}(F2Rd d9XOkfy稫=F@_LQ]m[ܱ)PwRjk,ih"5~Ն4˪R6[ZS˘HjP)L_T{V y)73ta'x)Ȳ۲;߷([X"ӶIl˩UXuiV;X30E1Xbp4JU`cEEnqW+U)g@1D"쪫RjbqI`1YˊuVctL5Kݸ.).g+ίsT$旳̌C䄢!4fErdT2( b$tt:3$Q4bJBT;+jܯmBK$\ܦ̑dcJɿ6[N0ut2ˑ狋~/]w)`Кb9X,WH!em+UUvSTSvKaħFV0*QzK05u^[# F(e`Se "%[A{eS2Ҿ=$M(XŶS ,X:#v܏xV;ڽc[֝Q[=2;7 lBL`FsMHL&E D5o' rto&/x U1DN7w0C bEg+}ް81bT3ީu~<\˝c[D4ь8"]oجڄ`j'*r\W9(a&3,ugT¾6} 4!1;:Ti6I%erKڈr;L܏X^u\/ l툋#.%!,+;:ײs@*@vABbX%1tS#.>. vmuC>ṳ"wяh ɲ5k&HomV%%p)FKuV X騫M`䧜B(^Q%CKpIcaf/m.㕉vzW* #'LP&x69AV odr2 tZ{柳5.v_GOoigG+^ݲ^Z_.#z&I?ӺRoto=*gsQp{ ivHkeP~"[W2l~"HFEʊ]d9(! (*dDN&YkEI-@L@Zay[NWU1jUj5@ABRm*X|=a|_Db{K1Rg|O1EK2\|9LB,+|% uxGl.8R;W%SL19bzMH1R:J!+ Ȑ9:ԕ *L=!hǶ\'CbJA` ^Bjwn٠`?'W6d}9 A֔sx64&'Mf6x7=LrMp[W$ȽB^Qb uJ@djdU-fʩ@ʹ5;zߪoʞ ʊ<ӂx XJEIg[7wT5Z42xT 1(s6(Ad^ֳnPzPKPS&*j9 X^!`]Y U|ҶЀފ;ct T߿ p?DTLF>aTJڌ8(vXkJoF8 SHQ|6NZ)n׬Vբ(H(dJʋ E=eS.(1(&EFs>;?ٙ~db=Qzރ( xe -L1NfkA۾7%'YI )svԶ~)8)MDvM6P65fSp6PsVAvy\+!٤ mR|Ɉ2G?Xaʉu-•8]_Gn @;_f]6dA$Z2 a 6y`d{2_;P|5bеUzt֝ Kɬtg\ۻG5zsYtQ8P-D,Gu`Eb/_biCIXڪid#Atf@2DˁP0k? Z}\;ŁEtTkʺ+.@i+T])V fCT^2-CɌ,?8ˣ?ҴQ&%DȒaXx`Iնù08[Eh(r"Um|~UAa1Tmo_d)dSɧQ  Z:WU}~ƹ~گOX\3)͚o:?o|6şej{<ݗ]ﯾ5_JOX6%oͼ_&qVMɥ5zOYYT⏞i~Xo&75ϼ=Mb42Ziމ{Zm^(M?xk *alIXR"ì@2o&A<;kڿYߏc22Cީqr cy zjdG*hm2g^.ir/ioNo?Ө66frmLwO.no5uŕ~8?;UbvKg7~ՙN\̭ sd2;|~PZo5SZ-u͘x\UaExO2m*~<~\NsGgw amFrZ]7VT«V(/u5D|z|ۣ<9zOO/{/"x$8I폯ӏ_~x?[mz~O2g[urae.Ff0+9疢rOfnp&a[ ʤ,"#?9v8G+ci7E{Mմll4.*skvU, ̛1_z~xB0l;Po_iݛNww7a#@RSFDD b FрA,1 sh6qv̅Yhk(QIC=Տ޵] 0ihtxsHS>L#9DZ' RP*Dian0*R],lH:*'!u'YX4Hggގap/CffzD;O9 8@O4/LbtVÀ%T8}<TRT˔S"!wK9 QM% C6qv-Gd;-E>kzǀn]K+hղny2eI}rã5*c"O&6c<&Gf'X5#0g/~^JOͼM?u?]z1kGNg=W:3x_23b2U V]:wÅ4aj0XL'R lEh֞)]⟓pRzjC**1BES?}g;al_w7\tѼ&_=,æ4BQs9fRx0)f$l,PD6MX7˜ʋ-0ӫů+,2;k8B^uo{OvJ3K9 s~Zu CPX7cRL ~*$E$ tW? 䶄FJ@/PJQ\y;tsXqJL&z6p!UX,xv~}w W VXV|aP"3`şW?3D y1e>O[@˚`~>۷&LJbT_'MWu՛J Jgޥv[N?k`.|3% 1d]aȶJEf 0{L]FBS~h]6~pf_"5m;˔:MdqQ7HD3{f>I`8܀ԦEr^g8[:6~7mzL&j\u"=uÍU;y q$Du70ߌ\:aS7#g( #YLЈ0Krg*4!@8)DE`ϲO:>PB3eEdz+qK=%D;ӑŝZ:!"Z8,^3%ͅH5]|fQU67q qb?ˈ V-xs+CG)d#tD90؎D``8V #.M%~ 㤬#=uz,>sĕ'cZ)˜v+#UXӓ1W .GTUc7WIJ*:s͕taZ`)re~^|\p~+Ƿ~BpbF}-?y~bԤF|HXcOƔ nȂ\ԦjcveppT`,f:V!wH-{L͕n[K" R%d+}~^j R=^ cmJ+)0Ãh6b^$'(fS IZunHq"n`XRyJ fJ q**IRU%%՛4WkO\qA\$-n z;J[O5W 0~5J|**I{ ŧ$ĝzQF:[i ӴR)ÁhK!m5}]|X=a0ϙrd"XVhDfiNO9Z<ς׻6M&ZNwu#3`0q79k9 h\4Yi?v[s̋9.ۧ,F-0_Գ\%8'Nqd% R GJ/MWmګ#&gJ5Iƞ +*oQ&1VYE 62( -#Mg= 1vu'|ŧm΄m8v&L(BYnBܔ`e(%bZJ 8"O:@.$9\HrD.$9 +-w'sL)AT 'V G4鐰[1U^X)|X cYro0K1*f/h @K .˞<.<2͎O `]%|agMc3Y `*q&R&^dJ˹: P&{^RUlw;NGq; a s Qp+a^H^ .%{D0P lV*C0!B*Ca 8X1c|豉hj4H+-aj6qvmk@_ÛkFi3bu;sM5å;tfhr-}&$,[nVݻ 'U !D@ȍϒt$ F@ b]\b8.٪w#Pnc]y8c煖Cڬhozu;9?~I=ηt绛_xʋ25?m}_wNŽpWNsҟc҂/u~|.%)AJ&D(0<2:0;ӣdk%K~YcLr gHa 5%a YUM{̓i绾=%VOH$~ y(g=6]CÊNʌ#]wSaR~ 16j:RjKf(L@Dcf"[C1bIkgՇ^6uD}뢽ԴߩgvQr[cR;ĬW逰 ߜwE%@UnsJ*-7T+o6b>|?_cpzZRl]\nT_~1v ߃Q|vV߆A |wK?﷔RK.w{[ҭxt"vz䚷=G0mh<>zufl~-{k,uqb9/ 8r#aѷO9WB䌂.dRbJ[w`)`k!wɃ&? 
=rn=r(pvaVJT1", 麀?Q TS­76_0~ŝukw o;?&7aCDR@&iaNFB Se )eJ7|YAoAw'C4vybۙ{bsbܛ5o^ȿ|nBɹ`1X9 {1cgmI %ȗZrV:Afx缎r=BWƧE ]ظ o ?g]x#މʼn[RBAMp&۸4=0*Yϯr0tKnӢ$JђdKk,)QsePOP gtxxSQc3B5Xkd%NXlNdyyV+*or 8M(rne"erbrX+B"sǚ= I cE`z3= LLz0/xMDNnĕJ|)d:&cϓSKi_.JWpbyנtՁ}L"V\-d8QXc2.VXv5bd\:)Ar9̵Rp%[: UToح;[~X-&[ -vx52^sNcyq4tQ_0q=FUURmQ0 \lbI(mș[+442*#Jb6uRp;ž =.mMbK[Ŏdz|b%Zmla#75lvk~6&\f9B$VI@T#R!*v>Uc]H 4w C aIK*梒łGP \MVIHuqwny.Hd3bǶ;[D-hxYڤHOڨL٪B!CJzAFF;aw=R҄pbHցfmv&昘-0i$J.ܫ;_UGU}}uvm"uz]O⽯)$+3 daI2" %E!TXT^vqvq_a7lgw`:}bzяA[H ̋Sк_L[cNAScoNkV`RvQ/C2 b}) ̾sS*Q;(J1>WPP*MnnmV]U#}|ٺvVk{V}KX Y^ :q~O!slWK( ){V1IkބPcɆt9nN$/\\v6aZ>@;z\b9۝Bj%~t%Ԋ0mnhCYwVwϑAyy{wRсKw)NحmK%tUf }+,P2 1(LAR릢r%AIن FEcUdpP`]siHϺugebC蔧퓍\rT#Gˆg|-E4qH,Mf6RPA[/I8@U S,H\v&S+Ecj5qb%mn=`#Xe5b2* (,Boo*YbѶo8bP5hW \Z)8當)a5LJA`tE1@ֳnPz@z>AOK@; j@E^9^Dp'.MtHcZ Bң T_(a7,>3搋഍B|ʱj[i xU c#tI#OTNܣv >b1-Jf' @);AtG'Kl[g5.c^Os/*P'ڗ{~oL ARdLLfC˳3˱𢌥eF#"};?-CG*/zot}S ԣ {Q$<%6&RMjE'ʵ죎Il;4bD2G?c{?lyHuΙԷ=5Or˰eX?ۥkCP@`g 6adZ@V9=g58W-C % U+ё>(7fP-AGWQRH%]d0{i'@ 0NE_A E6&msJD598Q:֦\x' Yli&k*>ޭ;_Zӽqn^8By-7e*G*AEcXG.dT*AH8(_,%CMueCd)Ưɺ|0;l ^&rB49k"(*VI$NHhD j4}^tkq!P"ʬ,OEg騬7uZ)k"Q*XMĊBu>( ęE()("ϳ4JS-M(49ȭlLY VJF;B+bRI𒳵XXQ>eFEHr)߶)`SɧA-YAޞ3UU>?4fv暉!}hl$_&+g/]YC_[õыUZV̫O3yfY2㴻fם Jˢ!`ϏyЊ~J2ˋmdt,D9dDsVk c~>xP~;)Ub(BqI9.wwW_v7y;Jof6 *}8'Af;u|Yܱxm^uzm@l:?H%M쁱2rhsY'޷o`TyrcLwO7ךpQcyŅ_|:03ڢѬN޽_έWs{92~=>M29f6&#ˇC?;-/O{?9z}滷ߟ~xoO2g%ɢpn\J,(]ȴyo_eJړYʜ U9j+aӣYWd2]tusMl>-wtfhoܼiOVMkMv=G}1vz]RyaBnK;,6^x-ۗh8l9`tKEܟ]  u\1\ILԖUa rC9S;VeO žEYQhQY@SK:,22OE;(&ZSe鲧Wad΋ѹ5sv%l+wsf.xN<-PkKW5x.b9yn9e [ Q < UEkD-tD6lF$j,'s"0^ $T9cɇl ވ'j.YH2Q1YM(3֝ J^ЫQ< Si[0]f$E01ɾDEX2Xp$H޾ 쑦x_1Ԓ\T3$H*1ǜL&!qD*F&Gv݁Po(c!5T'fIŒ#BXrFhO˭j Su2Js MIM"P2{A)B0 :5;Ud ލ:G:vZ;#{]۽[;@[^A;܁4AwwꥬLq`̻# աŪDʴ~a )NEpwnqMC>zd4YSk 5r0L,L CPN_8 f[+В_0$EQ EAEBMU;/UٛN8y;{ug+G}i2 3uk\,O]4XX 4.#:z&tI)J)a*x1%L[o` SߔvFdiYqR& Jew}ׁԙZ MCEQttGRRvUt _ 6E+qC}pb%iZ䇵5ht`b@HtToDvwŐ^~?/Eh4/X&44&gr_Ĭ/=Ln~[&ya价=^savi&^/%7}߸9_n|g7D3iσ t]n\R7jKRm,EFz1m +L:C_g5*.$wSAJ> UZ!f<*FDdT]r6,Vs"ҿ9cѓ)3' go-\M!P괧iJv#;o[;6;s044y(Ѯ nzӣx7f\0y<9[w?=pԒ6T/;o}/ RoP* +$Ņv[hZ]xkʩm&%\7gTx;MHhÐ +Bp ŗ`|ٽAX;HLf߯zXzh[mw'V#YXbրGIOX>'r(㚩H# R hUێP⹖L p9,ٜ'&bpp3bNCѵ 5yyXblՕL^͈jjSg/x\ ;;A"'< ڬ 9.$ZK~ g~f4 WDzbLe9o󏋍AT:!_\ {C5\Qi>SK\u{_vk&0V?#ZayZd~WAm5 ?mzؘOE x?)W~:/bbGuPYW(22VTԙi30-Dᐑb5s\x1;!KUrM<g{|{{HwMS;S:#l; g_f[퐖T7b1x aivnDK3{|wxͻ<^OWp=~Zv[{#CaUZ>_74τŬ4bgu٬/;^2/8ZvʺX6 Wڴ,GnsvƳx?:F}B͗&e9ⴐ,Pȅ/ c$zM9 %d})DYyB 9A3ނ j2K7r,vz(U!eH/KO*TL6Z Z|кV gUgt}=C)ߞ~OoI%-%-3Ϟ8Ϟ8F>ojLCyG}SOH:DWU+hW* QjB{ztŀ %j[3t*BsAOWaVYٰ`7fq!:I\-_ ~0~kwC4b|q!g8ADk?.#*J+J\0y!pWe9Ɖ2y6}3qcfvϵsk\^7d `AUX2G Du, d6D g0љt-#;D9ث:kkegV gY.%cSw>i,WzfsrTsW526*<P'hY |z=J'IKJlӴ%wy( 8 Q*5"V^ L,9ݒp %h%ne`zKZr P!B ϥ*Õ+t~t(e=]BB0I;DWp ]e}fz:]ek+)4CtM!\N+tѲB(9ҕT1!ʀ;DW*qLBF> !5ҕBvDwp% ]eZ;]!JMHWFjIHC(t2\BT¹UF{Е_~0|/7ǖbK^Y_!#}sUݗ~\Wz6VD)Qǃϕ'E8.DxP2#@'DLc#ˆl qE? 
E?`d:KY^b%he˙`s*E `<$2brca"mr1^gաogKץ: g,ٺ.貫<&L.ޥP rve38vbڙ:2J;^ ;S_rl ܓ//[/T=}Gjc\+3W:k֔z.hCL$]To$с>ipTȬhc睒mE9e܈Cf-hU0|ǃ9>5H!: w\ ~~ɦ_oqCЌvX8|1M W]]Fż5X[6;Ͷ7 0D%l\+iݙ[[}~Pb+M Z^9l%)a uyډU|,uԳv ȮZ:OQSo%iIB=#v -f ܑs*T I }"uܑ^ݪHS}=5bz!-~qHsղSC*qT;;eU{M >|G53({Tk@cLqq5"(I > \[ 9P$]jo/aeoҷ~K'R6^{Ǽ٢37Bg$7:9Uo B8\Ot׋\#۸wuߓO@G G7lJL*d<E= #ۂ:v]]בn ikqg>RGmLo,6[M.?R8&h7\ xyW}EDsT[GpU[mG&2) B .9/K #J!$7NH=k 8l :%*I% I43*ŸdM Z@.\#}d͸ZZWξ|4k&G36S8оb$F%2ԘJj0$D6Z{`Z)(QdV8 س#mr'!(fJhmL+]d&ab kM 6Y`wI JQ Rh*1^DlP y*ʇIHeӂfw>jЊ1$"EDW(ºD  a186W31 # 3"g]$ڬRv$IGDLAoA ;F iU֦7@8 $(Δy5D%E{(nϜŒX B.#NC^UuNbRҖ0/{^%$o)*;wv%4D}!O/ .ġDca1hˇ0v| l=r7ިr@\<RS<j 6TXݓdQ*kU+$RFI.Ԏ zvHʔn'S ,1 ĭ`6Z-=!*iH4ШD B2ST%)/RVis"䤎GvØL)UN+YR_Z NȡgsHhz Sl}{~5^_ͷrڥ>>wCUkcC[ew b$zCwQiE4 ɋ1]{oG*H~0^ǻ6q%)R!)9aUzq$Y=6 [tMOUb YKf%ct$Xt"q8s%QJ %Ծi*Q'#Jӭs<R812(e"+Ck=&%Sf0HaGr9Y#W}b0.F˟B6g*RGLʔ$ JJFk!xt) /z@zJ0(Y\3&ko0 ZA4vRo'wXħOFCLM$$61%Hv2"QΔhW()Nw&+'dž= ߏ7٦-2H.$J#ˑ VREMR_@T\(Ӳ<"Ḋ^(u"&| ,'SaSБ S0%͓(#E0ģ cv*32uؗ=5a{0 !2&g#QVǥ( (q(ˀ>䙂MLy2ldtkRV`:K& ZO}Jgm:.͂iŲR,5Jwn% gˢφdT@wJMLR; @*K d$k]Rj4B{KkyϠٰ$Ju{ghya`QkǂU6 simu,#A)hGsBfWb uMK* KI!2'e*LZBE(^FE2Wn * CUK-d<zMX7adK)mt\3g5EGz48Q?EӋQSmM/|i@[*RTZ% IV#1Q9Bp2[$1$|Lԭ m:"mr`hQ$H̜ޞ'2}~Bkv5CP«Ӑo854r{`|~+bƟ$/T?}:#7]up$m*MA|ëx6!z_h[|>R']qG@q0L읲#k2Obo:#xC_/-u"" 11`שmbzr|p+rma  v BNCDk{mqF NN~TK<1?NFho?ӆQm>q\ʗL/aTy63-FWŵF6.W6^͎ #0+bN¯Qk+^qT軳Yӟ5jq8zkI֙^?I4}c[,HhGF? U,x4X.r`v͛cUg]>d۬mjiլӒK)kb'l8WEm=^YM㦆x* N@߾)?߽:7?=|!_^{v? 0mϥ/2i '}ٯF`JݞL.KuFʠEFP+'ih[+!=ܚo|j[N-djS/y-]>r˼9$@7@Bx_ +=ҎL a)'P_i]G|_$6h?Yp`s%C6"U' q (\HٰT.;mT;*;BjM He ;.yҒS9L3 {'>mNv~!:7묁U}j(lR/nyGJHk$m=A)m%IJ8K{M )Qy 1-gdN- xǠ3r6h 0NAv)J5Y'Hi>+i4'5wK&Eo uvC}ZΘ,h eV霋$ֹI'ƨИxG ԛ*Ih/s1qC*>("jsA gs 5=H ;}J*>gG]I\iwS$ <B#sXd, 촗O)cdwwgr2|,_,P|VՋILy2-ŀ>OƓGLғ1Gp}QkRxG)H. DYH7rlYy~__-{q0$±Fe~#F7ҵuzݙ>v~:E>_ղyKVxm mw\olNW~hj/'Xj^3;s{@TTQLj7U'xEd|>(a ?/L[:?X!KZD%Tk}4]AR.RL|PGr⿈ Z>^~C.3{˷:gfG]<֭F=BRB'u)MƢN !\J$ZCXlňD-n3wZ=qd2%ͨh`r3uPbNȹv(^W1JɏƬč̮&#nQ^!&$w֞՚vŌws썹1PJ'CGMLC\F6-?NG46>PH`(9A%Bs"Zt=Jz JjuhL!4̘Vll^ v8޷-o4olOHp`'($u*8f(NT+aU:Lq,% .iÒ#R~.<=U8i w KFoRdf+BF)Wp7.PA&@Tt .BK%L$򒈕 h9LbHdE)TlYz)A\ď帻A)P7ڡxwhN⺂JYDņ-JڡP&fl^\rJ TBk'wIgŐ"[Fqzf T!TYEq',rj`eGߩbVeiн13 X O{V^Z FB"R'Vt:#gq4! @iiVY+=OǛ.۵wߦH{;Et^LpC[gt^(p+ViuJ3P2PRg-"H`d%Z+bF'cG5ye"G2Iexs:ڐ$?ҞCy:<4,|t㈱WA3k@ 3:g3Vb@3uZ5=(FR^;L!o:hMGz(ڲ e@dGx %^x1h}O8}"5.E" ,e<ĂOGÕ̰L#0I15CжWWU=]۸{'UƳ&AlE0ТrMbV 5F|P]%LH&V9w%N6YFCAWdgtsØ/g)aamq1JJp ONB}lʿ[=@1QvǓy[IE] 7$ROT?:y'r:#c$czFy_x㝈8FLyq USv5^U')#iwz>P7{yƣyL *뀻Ji!Z__K|f- {Zl h4oG<,X<"TZI MX8}mQm,Mjo_Cnl;nxhŔv XŘl4 {߱0SxOL켪Oha,; M1OK8 `$ѠѲP-Fޫz ܌f»cM9isbV-ń-;ݺX.f[b;'70BL/6(YtPFfUil~(]7 s!9.$]6'tǢ-ёY_A:JBQ>+t:9m2& I &iiz JY8n^jol>HNUiq몝w}kMa(>K-78[r;țH&[#2_# Q'3l̼is휶Z'_ފr"J  .Ȭ JXZy?!H%PܥJ%NgW󶕴ʰש󶕔Ʋê󦷨:o;O=gZ/nyT}$ ަ yHU^I~bp1;\Ŭa<P TgYIèʨb⑉ 4dYݫUSr))anS9W1qYVg9r,IP=[սloRvBteB.~m}Y8[Z4& cV,{ͮi݅7ozz5I琮 z Jњ Em'3;!1WMUt vռcfQc(8l.7jx|ra\jw{C[Ķ/{6jcqK,EwCYUtQt͹b3ŗoС _܆UGzGJ_\!=X1/J PTy 42#D"9zߏp;ܝN#'}'Y;A$z}( ҂yΒc%y9%o]uz1cyn vwޮ^ \ΗV-Ѥ9RQ\!(dD!exLHZfnX͌cղCR#,&L&`ʙFn Ѹq-?7VrIӯDEo&HC&m$=Y8vgp&?hF+‡tF1B<^EGh?_s@iI=cK}';+\; Z _o㞩\?^a\YUYsUN\A gͬzlBNd&,XXc t9,0 $ﰝβ tJsKoiKI^'\2}LW$0䊸E vWEJ%-9tNN ķW$dx,pUHC{+)@;*=Y_pE*F:\z0xDpE] hxHc=•n;"*1V \i :"%!\ ,ڣ9j/%4dz)9;+[G Gd qH ppUp C"֯ϮzeN\i_־12[jש8U؊+Wvv:\)WZ.]_0/"qWEZqpU={+Z{ 2OV#gN Z2E-X {%&[6,I)^?&Bi|j-zV&t^dul]Ce-Md2<{[è&"]$MT iSXFǯ)̧'u)-Z&ҨΟyl~IN~:j-',*i>nA߷uV hsM4Ҋe,{!u0Ҽ}6z\@k5QG>|'>VJq%\BI" QrqŏUuTJкM>Z|h3O p\/ V],#G@2G8(uw./JVB! 
9.pDNAq5dcMKvi"ҒW֗a5r5zp}?z+ċYb4.Y +)湇lĠLΘ '*P">T}*.yvJ+?Oe}vO$o¾^JQB΄֖hShW"{֙w<0XfgoDA"YmTDC6*Z鹎I8o|:{g2鐪sPXxĺ"b_x23eWWy]9)/W4[܂K}]R.<%ma`IaISRܾ42Vo^r8oyfc8<Zӳ샢PKkRZ)Dgyt\bh8Lk;)ZY^ [g똖eld<;30Prjit1-@}9Oќ+{b9.'A>0u7sdep+0Z7ڪEQ$ޔ֦~\j3ZW{J/QW\o*. D_ )S4I:g@[?2%wj ,dW9DBGLLcd1yȬ1o]/2Oxr}oGn2iIv]lvuk{W#j,`J%Wѿ)NQ3?4:pTib.~e[q)x+F<:HwӇ$(HaLy<'Lp +i?Me U=vm.5m,oŪVkA\uqriN?O'qῨzX]ůV|:.c3bH(x$S{MJ#i$IX ! p.|9F7X8}{9ҵyNk7;O$_s>co/cmFElZf2,S;߳ٹ+$j~,z&us6sؤ˫:6(m{;e'[" ۛ`-hgA9)x:Q@p&DIdc$= O (s,;#9ALdVeBYdA)9 =Yolg^lqbХgh3a&ZLc =gu_@HJZR`P>U[Y-PoӯKG]L4Nd$T.Fcɲ$uI 'Dj`' vҭˠIA5A!MɇF@!j⌎tށEFap p^okb'ϿڻmYx=d )GN'rM%Yfї KEJ%2 w"J(@/ MP%rhʷz$a,`9∛FAJMȓYYr%9ã-Gȴ'@Xp7HC'E .!\9CYHAN%YΞhL eFEgJzi|Ab.n %P8 DWn#9:.u>y@[15~gM\MjŗKeSD41J(lAzI*:kD'A@-T#{GѹϕvA 8Yc vv ºOd Ye Y$qyAdy`",r3jiAhS-\I&-䚖IcJZi l&ctOlc&qX.o=t&MFzFpgz{FeR5ӛCLjS|YK__6k ZR~&M $5`h$EwYD'LЦ҆I? WAY?zĠr'Ql>OQ%"P 2%;yo ≟L" nN&SR'8p_ IZU:0xdNe"3w]v7мY{-n NۨJ-#0|cxӻiKy}ztzqyzFhJvKrW)jrQC'J4}F٤y[ӼtoVDɛWW5eg.oaߟV۹c Gﯦm״;+ئؒT[ ܌6Xޒ&ZD3Xff1ѓ6UζV70V'n'.d4h|;RKd6NzJR5Yeq//i97~M~󛳯y{ܞgo|E;_hKMR߲~\vdB}WuK|řLk.[u6 '6Mʲ__r'8bYYNOO˚؇OoZMS{4mjz>5 jyCoW}+"*wb#A{ jQW/U4Y C̟#  's ܟ HN+h0QR8-VqGzjcBmB8rWQbv9ˤ/^@ȓ0Nx B̗\DQT9Gt؝T99*jkzsoYg vmGh93Mw۹54 O7|^ɑ 8졑Ć+kE') 8R2Fr)FD4ڐJ 68ABYG6AMUV1^fC},6%A# d|V4O`G{^,WϑH;Q< <ՑV`Pޫ9Jddl==YK7A[K+zcPg|&s⛧<;+y61]OtmAX:;wkA‹3^)?y{i*x1<?b)]ap~Vik{X/.AVz#V q;Kme/^,/:7ߖ31lob@FzNBEWyN&y9-]ҳ%6ꆹojOz_Ο8z.Xed'3,:J/+Y:oo'=Zڍ]>[9ۋ{w G? YiӋU17k|qDJo>Z ?4k3>|aV &=mԏv>qGEH; axHNbʥY }*_)(\%坍d\gNs/9z+Fd*s պ'IOa`I`rn@`wv΃G!E2iу1'ʑՎU;%x-x4M6/VĆݙ6pWtg2),[U.JoNw}3x@!uf )kH!)qtÕ'n w($?n ɏ/Y?" ɏB"!*T\%BpƠD>#9`,i]]:FK#*.T.$eh]lTk8äE @ug7cJ~`΍5wsȝۉce_(g'xd0GˇVUӢg«zݓ=>\>MLebnbpH}lN:XO:,.t$<-VA;/.J)cDBH {\ZjfT2,8$>R"pbə "'9z[UպIL7N9(X޷n71ff-dMom}pܰMLxє>l,Hxz _rֹ1%=\ط[6)l.}B۫.~` n_J7۫NvT,;Gl+ lujM'5]mkܾaOs1Gü7;*oO7 \ Qt_ORfSt{nmGҟiڣ;̭Lԏ08#ͧ]rZ\\{όsy#uIͦR$c q]!A3Hg2x !Jn8>=SNp!!. e^,-Cݏt4V׿ax)ii?]|6k=JNfI@6 W௴³_K3J6Z:.Z!\^{pz?S Mzyjb vQ^iy*5$qSa-@d2$Ch$\̦O0fS`F ٓV0B/mػM@dZWQCsTפшOzo$z{gpCxu9v'  G 4`Qu℧A]x ؖ < pj6:sg+ 3Ac:bb޹s1VJo:+Zt1V1VJ&./eo4~@jǍ y*Fyn&gБ:w)'}PNZ)&JH2qX0G"ʕ'T|p!yL,+L6%f ,3H9Ykc!ˍ6LjmϿ ;̐qH[Lhe^ ŃmB}9Tx;J$`ԞY.uKUi8/qj%J PGe%m C MfJtPq\u`+ŏ; ׬#)ndj8/PɄp6AqnY,$CUP ifEC*{sVAe[Lbmxj/I}hY_|q}?h/ͬ\a )VjSfdR:>GX:2ɤ󖘯LIA>)-h'LBdu7cF: 7[?}}p1)i1V ZiE.(QҲڙ3}$33h"u) z! 
$$ %%T 0+ jT|?n K ?ߋI-CIɊ[Ӛce1C ISZyu(f/` `3I.DZLgcmSFć!* a59&`S&Dc ^iRv',A Ϥjkj#u5RMV[» .<Ӆ{•HOӫ7|=38f9O@ATZcsXЙgBha52Bj܃ a uYYDAUÊ,i 3Im mAdK&ce(ΑtI4QGa4Lw(hZP 6tZC[/'c bg#` )/ dɉEd+?Pe*(|B EGA}=6}O-X[' u gƉ#M!xUaom9Zz%P!.4t㙞IpWu߮XbVXjxy9Xm`;zZG$yZ_ O_@ʀ2H+U[r=Zq.ـ+>"YIt2}+BRtuteW0 "#2帞l#]]Y&DW21*pq0tUj;]TNWh|޹t`u`p7yNh4wJ3;Еj_30̩젺j0^%`ҒL!5ןSp;t]OpSb-| RYU6TqQ$^;o~x!|X'[Y {>^Y>rnVu_ Uo\2BKRt~l9uP} p&;Xo!Y:ͩKoujDa~yQ/ w鏚%bDu6ӴX"'7%}(N1:S++ e7o?N~[gHdj;M\uE =}g#jլReJW&o{+=7v_.y>DHewXkXd1Ki&xC Yx|n +\ 0Ww J+vkFW3cz|z2/ғq>dNTl奷UνNzM+k}Fz!rVo;ၵAP{`rۗz/$Ƥuݼ9_ Wm69V[qO7%j7Ck)RBH>)xF85]VxIY5,je,xO=x%beU0;,NV[ޱ٢wHBQn.sCxm޹qmy-gV7<2 cxud6Gs>"vm} Y$Oi6;]Kߴ|t7e8y:; 7=jovPner Nn3KD=A7+1E HR^ݺ"g|'JQ^v._" X{2ɥKCͻ)_OyVӄ%L 4!7@Sdb&5tiy:x5)絚lzj_#hw= Os$X #i56wYu7=ЁЛhرn#Gv^.Bjٽv1MV6HD-7?hTIH76amK)ߖ%"|N"?]?ٵ6..6Ȼ 6YߣbBڼ;ɚK}4E7QB~\6^~s>MRyE\9̦[cr:-#++f- Ѡ '"R" `Nvj+PR" ZQP1;$S",6sv>_2bu>k|Z䖑j5#&ԏLjX# |iw+{K#eZ)L =/'w77-"Ywj*aytk?Eʈ|rRn{J'J8G=n 4W*[y0"lLDS*6=t#T19Fa4]Gp¡VR#]"]Ic4U`5BWo#%cӣ+esuGWvV 8A\O`tu:tɐ( ǻ*pq0tU]c0xte|HtUms*h`AHjt%%1G"Wt\Ǝ;w#ZqswDٳ\G jqHW+,P \!BW.NWec5{ӡ+JZ]qq0tUZ5{W#]"]i ن?V18sxY9ڲov[a&%:/ *:{iF~ $4Y{dq:_d")Gzl]%1ɰbvnFF߀ec8az0a\o Ro8AA0)]`v0tU9LAHW'HWBAM"˺`8*h J-F:AReN0X;"[ NWNHA \UAz?wUP9 ҕ!M`<(;ՃI,hm wuteU3^Z>"c骠CW ;>]{}' LWeG %,_@Wl}EP +,n7 Z}+BtutJPXUP誠պtUP5)ҕqP`3 l^Ch&ptRYvl CP^od1w4 7Q;dQixWP"OA FrB(n#ӃˡUA{sMvCٷ$/#`eCW ]ZB((JI.I*p ]ZtUP "]i0װ TlN-貃єiiz]M$줥➗OD%i2 o7N~~ϟy#{}HfAY=yt $cs΁Q <ьYXwEr[̢&F9ij"ڐY.Y#igg1 i#2ƇD&bF`vB+jHLE=fUD u ؓ]t!}d0ڸxH-A1Mdm=i'Ie6SN|u%l{  LCY(]p)k)$Jc \;,-^'ۗY%duV%c.p&Yqd4iwל#nZK.XM 9DKAbxDTL\Z͸Q-D,S ;F@ΰ@}(8dP bed!1^+"P.n Ptڳ X4p%F!x1=4n[b$it<ȵ BYAL%X=-@HFΑeyt,oA|N 2}ɷS{Frauj`N%yyRyKW =9ёjme@ vY;eqQ $2:js#J˘L h$n RRKFP2Ɇ"y<1]A SV#eCKhl~""F?{ǍdN|1(f&"5ce:l iWShաZ,P&y' 8V #&v12~l>X ơ035Bk. 63jw7j9<Ԇ1v2Š|h2bl_N</0=~FғYQ[hQ}O goeaF 6 SDbG D+(|]Ǵ7#sT&%Hy F=$ -L-hыu &4`T s˿3ೋC4VUT'Z gZP)k*  wf0c6@oo{Rr,k80(Q^bs`9|h% 9 _^Pndб":JrWTp=#v@AJh@r5ˎa>#6;'p@W&h&KG 7ƣa|sE^NY 0ja2 ,12DzF ɕ10Y/EVbA \g& 6B;f3PcsFoSb;5yPctփ+ȣ5m+[731Bf27a@WGbD\${J7x 69NR.6OgK+C`0~HC_:T̎IwT* 8PbSO qH, ֤r>RtHtţ1C`j?.=l_O777w[{27 m-pm;dC>m9 /?p}1:ےdi[sۘ^o||{KqnsS &ZdG᧛7tE1ww]<D>Fo>駻wo.]kw[m7V>9~;|1nk0ywKK76mY.qk0[ wʄ;t;[Gi U}O#k! mO|Yj@^jPM1 MI MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$z@(SO'.BpOr& dJ= vIq5 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@Zoه@;:L iI I5&ĦI MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$z@2ʡ3 Q7I %$TI&ČgI MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$z@Zӗ'7?\VSpsÆ?\wv{~So>\=?ʶ&MpIKpIԦD%kpik,[O=N\W6/W{Yq=݅q6jq5O.9>O-jJ6fYr3pWmz( np%rWPk_W2D qPpOL\܃p\.]_+QjJOq? 
J7!}II)^>ԓ`\\3/Q,*W\Ww#\Џg^6-W+U"ԓG+2;uwW#5)9k:ѳwվWP^q ߮#\eyv?!|WPE__@֝?^2O0]͓ϼaZgtnY3pWmzޚDpJ& DmNKTc(փ+9_m-Y |o3.[N 1QZgҐLҢ-ҢriOoCL|`:钏vY~ ڲ3_b9H&hAm9<~RqKrfZuDB&~n|{s5~lt޷ϖ@4U yc9 >nI r:Cۻﮦ EwPP|P|F|aINێ-UCvrQrCEk+ p F-[*:ms ̮1ˑGg~u{ҊME#r륢z szp}ܓ|7+Q*bUxbӇ {YaΌyr_<7Oy*V  6%l:FSJzW6JTFV\W(,(yw%rs7xy*L*|Zx3զs<#p4^23ŃSx@˼c<]X^O+E=ƣuf,ue;:e^+r<}i\M%'rc襒)-kr\8 Nqqi[Wo+9yS'8XOf 6v@NqCrz]?J D-/]A%ҧZR4]:•ȥnp%j_l *X qő WLg,!tS +VxW92w+'"7^p%jqWĦ&ټW'79K^JT.흼7U쩧Ed&3/C/1.Wri]՛*fW"o%7 DOKUܿ]qB\%MS1ٛnp%r\Z6KUiqg+Ws#\7X :D.^plv%*>zpr೟9Oq5G3L 穴 + \%sv+Nt+M/.WriK튫73)SWΰq\\{=6Ϫ"~3ب9]68e{<[uTAp>{fsh橍y酜d 9=u+S7|-Ծv:K;Eq& 6zDptJn6!ZJTf݄F\Ep)np%r+ex\JNJ*9sO` +Qˋ/EnBX%~?˜zߗ߯+`bnbpHcC6e3osVcMAxCce{[ V~\zNfoT>cTp7l-Fr1TOa~ŭqn/[e|{/nJq{|=[y,qh;?u᪾-eMqkWFɕ .y*heF+TL# %AZΘG9{7䟖v 8x?c ~M»a5wkH˅?O=(}e=V(^q)Cf)9ȉޑ&~*׾8E/J_3g8+QE//jJ 14Y-[O kON_>z;P0迦[,ص:La{UnʅzY%h؛ RnBoMVkMeɴɢ.1&)4EQ'5-mc\ ɏz "kso4sRǟe\ΗZ>іgu1a0~q[m#^?>~L1|Q'.ռenTŏ)u0R{1]]\~OG|X0ܒk>|XU%,,ڟ*NGF3R\R*)?9jmw8K]=D_6OH<⎘}aEX,L3K1KTn_IWps!J7 VZ\a#]2ZKo+XHtFjMpZSe)،caB6{A-v$~,ٻ BȽ pCrd3K@orzޫ'{x 1x2x OKߗdx YZqx}x;ZF|7'Мx"<1+*B NVmTȪ172 S3K<-ߤ } IVEb IQ@8xB՝1ÌX $z|юW ` tpG)Y7Lo#3}YZo;qWY/[w6H\G+҂:hEJG ǚ}1CK)S3f N."g{РA2W |ٶ4P+cοzSr8C;)jKzK̽Sbβ'HVJ̽W9f&Xs N~H˧Fޙ;Naxy^I 9dˠ.~M"El$7j|Ê Y5laS*WB=E-@pqij ҳC> ǖ1jUqݱG ԯ6 0kM}2e;!-GCZO:rnPlaVRn,ӶEFs~CR^V=dHy%*%8c`5pFb( Ca\İI2[LXe1!qIf^ͼ-[[:g]X'%Ngv+y!"Y#sIN:mDEԘNhdV*SBb3*;Go_Nk ~Wݪo@2 : ]^VN8jy#l)YNRl};ǂ u0GA3*䖡5 8|cbY`q.KUyZ3`/mȵsMٝkzN(4WÛ ih>>9+-`V;[2;OhPU:*,)H4kч ̄ 3X7Hq-vR9HA菢U:tX\M&# YH.PVA䡛'cm9gҔmohR7G՝^rZm_bĔZv.;s6sH+̩\TX*c9a>͔g3iԘPaR@5)[A5)l3d٠OlsBgd *K12ɤւtIUV9ɐA.8k'LXwG.O[غ󡻯:%gT1)i12V 4#gAiE*eT5T8`N$巄"(:HfceC&nCq-bD~ ;TӓvPy}*X3?36*8joXsw,bV@[xt3["6әV֜Ee#WL ) Yԅ0ښRY)ؔ :hU&e yL6g &͵+39ڞ8=c=RVӌ}*C =}Rmۂ7,+luo0ds 4{8t&~ Q-ms5@%Zgmd!-;2kE L&jL\2.=FtKGfkc P̓ڵP+{myn=%;ã)y6&Aρ,hY$+ 䟘Ob(6>xI2Xt$C,j@Qp$ >D~л8SP="tۏA 9ECP>Sf(d2!vLfkrzg9c ƨ.kEwgB JԨ9I`c$& ob퍢n!H]/NqRee(:ŭ=')@D,dҿBR rd,&@ Nt~~dvϯCU\rV&)޴1rg6,xfɡ{L&kdm˄{.j***ʊpit#BSPcSwCTHgP&uG3SOn;;"tBc J[3ȴKRA+RRz '0S(L)9Ƥ}H'mQtmp9S3%NȒj1щ| 9dʱm_z7.[9rЁp[AhPaǘsYXC6[e&r [d9m I5?m q,ʀ8dԖLfYm-LA  ˶ZEw2w]ٲ,mnm/rȝf8oLk0#% GFhr2%Zfs|,FA%2{i#Obޙ3JU[<"3sFR@;!6))8ʼnRj< S`s 34[zN<`lN3w![FO%:Aϧya nMʪI|$b59 HM)r6irV/QqpDOJAU,L0GfC}}/>E ^'F Ix``j%IoB!"ݢRI2F 'Gk]xg˨ 2i5宵 ,c;#gÔX m3. #t|Z+s/G x) qZhƋricQ8$ 6*j#@S'cܕb2 ϧ9Ӎe!8V`ZKІitB-!@ h4P"CA,Z3qNJq(\x8uIX7ahs*ε7T$E84F(* ?YQMf>%%{\Nlʵ` $ #R^DpUީxYJT-~^gg/\K)#φӽQ+y=pbh99} F1?Kc[ٵV.wW7ޜώ opԂS`r{5ל#h<|Vg~췘a$dkOB鴭ލҭ\g(GJ? Y̫x4l&zzt0[78-Z{ed} zm&N!HX"|3i,Ʃjcb-+0y37ڊtu=w r-~s_fo8J1o@LzT/ A8To|w߾\k-{Hqk/FV*AGbq?H`ȡZQIc!2$#3+NCo#=Q.;xڤQ;""\jM9 pmC1Z?5!"r& .mt:٩x3 {fgŇtYg ۑ4'lRoQ :mep:\IiM>G rN^ } O[d$39= M瘠 sJK88-Nm&@'( {U*^X1C$ 8$#]F;}Aaڼ:I&Uhh0DF"L*U"BiX,3zkS2T R6C ,x+Q)ohV ԛ*# Zf|&}RyôĠ0sɲ.IC;=%[UƳ. UĕL.FuwCF XD)J8'E<0KIbi/cO)c8i3_oZxW;zw3%sw`RչB)l(<.Jk'̱D~JHadԜvoL0kd{ hk=Z6wnqMG>xdfa4%yX .&ҕk"$/" Lh K) |K*u{.ҜJ! PGÿV +=h;o|E"nduS!P@74L# }N?qeѽ3uaߧ1Oze\tأgBGyp꣠#\&alQ]:_메Fz4NQ(I(r2RJ"[zmv~ļGIAI훇8!s3 1-> fZjv8(XۣdG0-cۏ=6=N2!G%&.CJUGgi%D锨A]EeZk4O {l)$r*(k}|L_CE>х㨘af~5bW˟It~k/Ө%,vxV@ƫ@[ou fKmXoFrv(־&-}qcF#= Kն-XGm*w@zڮ# V=SRAs'pLd@S 7/2!Uy[nx彛~8O8}<O.D" aq#aR_?Zqﭒк2lo;_瓼?7 ]-hWvW:(~_>Z\;?V*yQb3h:WAoL_\!o?׆V~jI1CkA3֕P Eihn܊ K60V|\+¡_tcC0&( X\u=$b)X 9ӊHTpX7ڪ&pIĩrT*HA@sHs;6a,ĻPBe(Ю 2 u#Á_m6 ȏgzd7*v~^}eӧzM flP*%fan(&2!r]|DjM*'W D2/^^1|9! sVD`Q##T  4vP`$2K-hs-Səy="B,'u:#g~4! 
i)B+"Ǜ.ǵx"[e<2GIOVJ#=FH"D(|γPi v'9Z F F'cG5y$ <yFÅs!Z.ZKP9@;4zӉp݅a 8Ys|R 41ׂ8]"#F ; sK}gRGwH+a]-2)8d0tQE][v70=/uij׍V$I}*TT@BzEABXHBDofA%8BWWU=-n\AFY3"vNvE7Fg_g\<9qrG5~]=*U&m%z:hrYPc\o0;(8F" 1Ey6tXj<=}a&3ӳ:?osz>ODžpt&)|Lɼ,ǤcupmRĞ VWob{Q5UbzR_9f:.H豫7;e~['|F+:*#*j*XγO.+)(+Gںh^B|,y!*quN*7\(؂6 oYjzZSߎi7\:J פ_G4DW񏷣X/~4q㳿"Hi" 5ox)P^MRݦ< nC螦<53Ew$fu} -σ:6MxrigE VuG&)$@-Taw< f[?1M̴zD6rMG_wX=޼ӭ嚑«{m3׬R/r2* >qRǓ9bKad ,8@@O >Ʃ]JuϏHޙYIhjs(>{ӯ{,*`v ;VRZt*Snv <(5:h zB)put{7X*7Y-ZG]™hep# 5IQAqBq]W?늜%x% x2R[Ž+__p݌2(E XF2}fuQI*t%*k] .hI%M eBOEw΄d;dB0!e. UL3-#0dg @I굥܀rv{&|$PcA);2@J8e%#OwwiӘm9gg 06̳JO7=5=0Hec6D><]XtxG킖 aj76΋ttГG +bCW] ] !̝tz,8P\ ] BWjsˑ^"]H0V_luvuCwZxsz׭E-<[aWO;5a Nv;=s3"ZpBrj©`xOowd vu\/۬)8MCE}]Ųt5Edz\\^ߜ8V~Y]#=3^\?ﯞ.o.yoXd^XMſyVPY0|do5ڲCc~1b#]rЪ'׿_QpU{B[uG_)opo~k*9mvU «jh9ɳ[g L*Vʒ#>:aCEY?w(v7ux,9W_Tjlz-9<ƭ(1V.m[^lgQ 染޺Rݟ_^ʟݳC?ޘd $'~8C=OO=H. ,fpKDh]$@)zDxD̒<`CW.@j |HWT.-\hjZyCW^KZ3p.80@qKu9]u9 `Wi1t5ϧ!9 +d'9 `Kv1t5ະhotb*FgnrCk@WW=N't%<&&zgagrЅ ;4;C]Q] TctJe1tlRj:w(9ҕU=gv>O?=u¢7oh|/лH:w\?-OZ85R#71XTZRSz:0ɂt,0. ٣bgZQt J7 K+ݣbgNBWm@HW/DxAt5[ ]. Rj2w(Ytʓ5jDWLIRje3w(g{]nQfمՀ+f)t5U#]@R\K+%3{u5Pq%UxI YΕ3w58w(HW]#COǕxx?zZ9 ] Y@WtK+j,׉.;]mHW߄o;\oj 7.?V@9ѕ,ƞVhhY ] 0}]?}.!>>NgyOZ1{vM-|)'@d&w " 9'B|WJ'R{r~(G'HDWi9 b6ttGzt%jVpbV!J{TW/B\`p=/Z s+sHW߄_]bV!3w0[ѕѕ+{v1tYfNW@+asW@|HWQ%+փoCWC[_J2HW ]G޾6,I Np9Tnh ]ꩡ'djVp] ]S %#]@x vi1tw(͜ZkNW;Ջ> =G;?-})o~GOloMe?>.-gGq"y@c~BLy&iY>o>?R~%@ď+;4t#/^!>n u[돗W~mV=Ɋ?ͱ%ڵ$ho$Tm&#{gr</k"_(90|_~d7v˾^7g(Z}n~ juvQop9yc"u?ǧ=`rL5( LY(qLaU דί5hsLڜ1BR*ΙTBʹRVL<Tg84R|;_jкHs),^Md [-hsbtDQɌ}6ZCZ5bkQr#C7hFR)%DU7k"w>\@RMR"f|bLj4@}PRҪF1=FICK_P`0#ZOfh.G^3.i%d|+MHflwmBZh2Հv1[c(W hmH.[ C6ي06Yʹ#T<&B4J\k%8>/ h#=T =D+n@#M'tKyJcMl)cFgdI̱3P8}|DMѷHI-JI0QQ-~B_[ҋISh#e`DmMu/ g4_\D@$X'Z+e,ͪ!)![Ӎo&hzꩻPc!c搃G$XkϒnQB,AF=ԆR3uiL֤zW(P >2r`9dg^-TTlPtԠ-!xqwաkӊCr4.5 JV**>:'S Q XRc}S`1גul01=b]^џ|/ CBpL5Pհ֞a Uh[EJջV8V*SO)^b԰OVҥ:"SX dlِ&4:+d (P2o%CezUHX2p6PL&SHuU %TR K[CdܠaS`dxC'X!aAYPђ4v&߹{f)ʨ ߚ $D3 VU r2C@?J ޛTtgc(Q@(ʨq`j ڳl;"KQ` ;ljA!O9Ðd|F)$usf8Bi2) >&TYg}DdDۃAj 5Vc][ }FҤi!>HJ1FwvӋyicƬsDqr|1&PT؂)Ca8"_08`r]ƴ*?hܨ|-A\/k&A ڦkka$! *4&DtdI] Jkr Re1G=GM`%|^Bg {jN\`2Q*ڂQkfT8 b_= E }&HVBݺ2:@vTmXW% 9ՠQiEmU;1m md-Za$ATd4X!&>߯.nOƝNfͅ2XoLB&(c cQ.)e ^}L! %z.x WY; a{v]:| $$hL)X*!Đ-ChgW1C;Vy@'^AWH!-RBPS`y6W( @[#i)σ7" 0HdL@Gf̱0 "uJ3512NKL pyP*Z+;썇#b/õj ` 0ͿbXl9ϻɝ٨A32pmaE^ xyl:Y B @ -OaCWʒ@r@b@7 [h@ٻm"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r}N ɽ)L 9¬}qZnzC N q"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN rے@ q\#qZ{RH}G' DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@_'EQkS~3 dpɛ@MN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'8> \;Cepz6>wW~92~g#L |Q%jg%X1%T<% 2.qjՃ` 3$Zf ep 5Ī)Wp]g4iGeaTXV`P!0owby)(h^ݣ+a07np(A._|{ee?[mFY0I}N8Ogt d9?P1pKuur6'0J({R #E}tpZt( C/X$(+kd1 +ʹY>ҕBHW]!`[R;u^]!ʎb%^+ +lM9 :S^ZbDW_ ]{VYc]CEdV*:'+T%+é?o%!75X߰JJYW*xS XMB)p.c}J4p&8JDWp[ ]!\Y ]!ZNWH=+ϟ r Z_ ]!+@I]%]n+) OsBBt(%CsS]`QWR ъG! JIASHWn b ] ]!Z{BP d9 bB-t(%-=tY w:x dɱKdH/$MsjyF*VteGM+M>^`uP.Hi{(CZyv2Z`G*wN]4mzneGNŻC}F7i!R̿=NG;)/^;ەi e?v%U1QVCN.gżAC{Z QJAZmVL{^]`#e1tp,}+Di,ҕQB r57h)C)4ҕUZ8S]`M1tpm1 >P*N}+]]9-'=Tp}+ ^Nu5_B]}5tYiOS{붃ۡOfp;gJoAWUϵBDWb ZY ]!Z{1=+|oemAt%gŨ+D+] Qj+'aL;LJ1y7z7Nj9Lo)&|#.3 ih9UɀղsT˥U͊fTxGaz`AS5u39D 1]MB/}+D4ҕ6S]!`_]p+m<㽧+D)H]#]˭+)h ;ŋ+y)th;]!J'p mٕZd4?ipfsle >Y??8:#tGW7uWu"_ֱbsx̕yGB[m!||κo3($v-?|?Zjvznx -Ɋku1ayctm|PX0mL'&R\(粷J8-SלVحĿOBX%MuuCIWR^- u}?~v}XeMI!:=-?Lᨕ^Kyg-k1&|^//(ΎFP 3xWo(M x?߼i ̀R HVB P56ueʗ) W? gGhyWᠴyij'dMΗOH0Gt%m7}y[/\?E996Cd j(Cԩ'jvqHvJ|So~jnvz&8UёRN)靍$%S< /8~mkMsh> dc9ڥ tM򿝅 x30UGxZCVA1?B )u҇Xum}iTcQksum|d#>zYϟ? 
&p6<\@S[}4_6kz7l2Q6}|g hV7az%aka/A?m91KfC<]t:5IOo9G%dI!3ݯLQD(~󣫻0G1ו)VQWԮf _χ77 kj! : L#4\oSx*\OQCp.UIpӾ]1uKDHx8EqaC$ 9l9&ϻn.|{& m9[!\DaSP(esP >TGHmEǒ+&B@c(nl`@׈JPW^jt L'fI\#u^Φuhxl[}.໾ĮsOe*0Eh F#s= rKagP8LXǚ7BKlȞTNT 0tCH2Qol ߾_ֳuf^\ý8ra9k/-NYٝ-9)P%g27ЫuLck gQ5VT7uԶnn%qIk^\j@:65)FFε ԮqgpfiqZ;C>50q=p8Ce|t=ho4ݞbͅ h[XjJ4ҙ@=7\ P0,eEV0sn2WVUb^Hɺ|Mil46*Ym`nǜo`z7M.UǮt:SްZCY96)Z|:ZH50] kןXH6Ͳlvʇ66i Yt)hD.CPF'aRm;ÝIޗ$3ZiI%D2f?I+LڳAD*a}iǾPw~.lmzff#f2;:(Z$VG4 Q5*u連I׽Җ,/T\i>T38mVJ/vL^L%, &\,m3 }hajՙdTK{6<ߜoD^h3~HnϮOG7d?M*԰Wۧq ;eݏ~.R U2%Tu\@H9 )RydH$\Er DKuiҽU,kO\ EnOm^ =&b4AڒC +cWQ$@%fP)<,$\{AߛCΉqy-v´z?MpšL"7n%jI4ɇ'*Éu=;(Gӟ{ɲ9ꝹtjCƳ #@N+Ԥ҂ ;}R E3>l>$d EQ ݒӄ SZrJ ܩLކ2"ORBf*dO0&"e!9r.d)yoc铱}L)ZI}vWXA`0 lQ=*!IX8J\_M FH+YbYL9kVY-DHUp%c"_ W` M[]rzuYW ~\rي;jnL=jb̐{ĜD`mv[.  -H{(.i};ܰ:J=x>xZݞϫS-[p46rMcr>N6 *=^ힷ m5}:&/RV`{aex!-ֽr/F`8a\9@AH,-{=T}rj /HZ[9IdrMl )D2 v:*HEN|\jK"46u= D6Aj`R+HOւ5( E0Rc!uԠ3qg'^M{zEntI,nMʏNxmRLׯXHl«%dq)g"e(j[P@ ۉGt<v}dCmS, zC›5vo(AοA&y^A qF؝jblʪAxIΩW|. lf7 i!m5m@φ<#Dw'0 QVRMy5G25$ԾLs'3~MXL7QGKBX*L+ֻ_ ;!DA[c0'6]X*i4x,pV75 EOa\j6nxsx_|VQ3Y+t,0HnJUq/%\^J %~’tmGӯfq+]TqHZqI&$V 6\N*_~~|tcѣ5yc0R^RIP1I_ri룳dPS*ٽ8.NJ}jg΅}t⼧cЎWmR?#9UO _lHF%kk2.+DPE _ rD SvG8G,z!%XFnD BIrlYǝYf @FHB+}k$R= [>WuH>n7s܆||ӗo7HG ^nZcS<:1 Д1_m݈H(KblT(Ӎ^]>6|4SW$23H',:ep؟1<4z#bpA{`F]$~"2ْ0RRu2^mD {XGF}hfw] ,Pudw '.x5I/4돞b7*&2$O*F"zaDሑU !h;}lvxwnyx&uˋ]iv mDi˫>ܧYu'?@+G Z_~4}շJ?ZR$փ9]<) 38Lrc5\\~9}8Uqee+iY#=1G1UnjdeP"ᑑP*g6H(J9zkGEHlru ӫBI8TV\'M}>4)4wuRo(?>\>Q-*Zm ^f #^0{.#aL5yj| ''vCx8ŐΛi)fX8yB,?6_v!/tfMG)=J0ޟ_3cwz`a?}Byriٖk?pOP˫so>֞x`䬉gE 4?kz/i[.n;bM;5q5oOho^ٺme%ȯBt6fmVc݁'i9F؀*QHނvІ+.$'AQR3d_:Izhɬe@O 䬷ɘSM.=o$rd7Nj`*6YԗvR^DvNjyjv {zfkmcRfQ>i(6EC]绤(ur0ismik:~v0h^}Zz/28YqxPC jCVL[N-92'2V,Ѹ }.z!R4F ƂDB׍Mg;>G嬰r+}_bG]$r⥐k`b'%FT.#1֭LI.EȱD,3^[>pW30T Y-" g,Zw_!3"h_S9G2)3n7QXH.Yth2V+LJhAf` "3qv֮3[pPJ}[iyP>`fɃ$$40*.fZ)`0.MGOm$E2`D|VԠ)E֖@s4%_1 EjZAt=!NZoDRqOFSLmOCi D1c`hT Ѡ3Bqji3ki~&K{oGLӖ\HdN%ژ,S! !TsdFvmBR9 4 ]F$׭\~ѳ_g*7ж<Wt?hfP,Py_PRh(k5)@E jSV pϵVj857ʰu9<1b3+az0n iFQj۔ S`SA/dY$=OFֻ53KLjSː/'ϏK {bᘇ<^~kv1v1[OQ do=: 7y~Wykzs$GxRi$ś1e?@GPè9YGq P2BI!AKq (f"ͮ^oa|U":!r YxMº^\R\vhM>Q5(O۪J|Zo@F1Uy6 ih_ۯͯzՏ0 B,Gi?~4s #,*y┧gZx8_!]q.gH-IQ߯z(CQTS Nt}UM=Fٸz;uOٵ.#__50IR p·ΚefnW#xլnq}o9q6#q~ah0RXޠ҆83g1b7c/M7x먔luqmݫ"Q93.t$,pyL8kFWp/ޱ/61++k׉iiY#~̓d远|І[ 4fhU~@[nweHhPCo GA%! 
Ur}tMؿ)NSag˛ѹs9Xg v'l1*sf tHSII(R*$'O)eQ)]"pkgDt˴'479'.Vdh 6 =;UQ2N4 .=ʀA"shIT F:!MkA?twgrrI,\]lABmCefg9[u-x{\rZ0DK &Tќ -@#IClhFF `#VG#$YB)ZP*EJ.WL5x- x2R|y` ߘŦ vEoY>e)伸eYCqeNgmuN~ )焭U*mKH$}&$^oJ!v„90xdTQP0FMT*j#2@0dHSRS6|$dFp.R>4e"X<>*n Y*L@ˇ;D9-~U;|\; j+7a}!b o0ƾ񾧱h,46iގ>_UFrgy2O,VE9֟OB~Cv^~ /꟯^49`W/~O^雫S1ďGer5M.c?"ZN}o%-l0լQR Xy{gd0,2p1.e=w[,q_|5'qs4~wE[ [Sos,fd7ıs k<ߢ!fٜי52dQ%N'oz; .jb RB|:#iNn+jaT82}t|xq09C|٭aggƼ([Qp0lr]lI s'1[]^ovՓx@W}| .Cz:YMk}-!+K+:l Qivֹjե͗͞W}qO 6gWxv}9{q:G{.<g Y|nMɕoNsW^9sz[lmSu)qtcoV|Eˋgzya(݃76&a!D# PHIQ%>b"%5C1*Pr,1q83ln J +q1qW%qZulԎNQt:b>TAm{n,ۮ%mW*_~ 3=L\͟7aPyO﷟_8ie[#uB/<6^ h\Y z7^ kl5&;fDԐ $kw"f&aF(.]e?QZ:A)2_ɍR\"gi:\e)yJP`]eq=γ,&=\}p* ,UW1՝略43+Ai3Ϸ\4/K!aqx|9_xt+S+^ BoUύvd gG3G@qf=a*同!ZV%Fт4":HVcL@_(O| %$, t}k5}>WMkz=f=-U/B!E"D}Q_뉜roJq!}q}Q_/B!E"D}Q_/B9UENˆ`s VgRRU[`uF*%@s ePٹժCU7zW'V*!jM[/H*lk)X$C%Q4[Ѿ~ErE5AZ&靘Ggqɭ(3e2f}Ƌ8U.3+f4(ojy\D&u@BRU !9G!:$N=k 8l :%nU $ۄ Us?2*Ű W-X8^[̌[<-S?;O~sBeχTdBLHQ $0 (FkOCLK"p%"6,ql@ sL*/<(jJhmǑHH@*gXŮaձ+xc `jAR6я hY1|WxzUIz۾/Wn-,wìCՈV&[ XS|-XI%q09ƥ`,$C(1L3#S3\ h\zVx=r 6)]tkKfM2a|.JԣbS>][oɱ+| H}6g=@%<F_%4%e/H]̑dj(1 YNUuu|15jo}^Џzjzq8;Ѭhў"H %`NIYM>nC>c;W.~ wNςCF)dŜV0SJ /j&eӇ"j@fiƤ * DR)YāF Lx63%{r;tvl`NnNkjdV.< ց+ÒnoUQIކ,1"35ח@2%bt;ӵ}02yʨSBR.!+GBXHY 3fd˱`G{η2HM=2ːcʔ)2( C .dDS KGf=6SШaiw[$\$A%YgQl9o\R^d(zlgoo'8>gY_SMAgפ#6a-M w{fGx};֋hSr6%^/.4qh1ևLX{IVkuPWg+0 1 7GAtۆ6ۋ۪RO:9VGNȓcŪ`o$R{[w-jm~\;GNv Pq6;9}6u ֭fmQ7b6V#z&ŲL!rj쏏EŀfݠDBcP"o̻[vڳkG^HLRVF8# F Tu4. no\K)RJeYu(tu:?F8ߪU:za̒VN(" & &"mD"K٬1$@ WW ʈ<2o/{޾]Ѵ#?t5O~ha^al}Lg$_c@fyv:S#SMQ"7h `F;p*hex^2Gn[t6HbtTYXNOMY9@1~ mn U# 6~PP׳QS`Αp6  :{]TDJrJ!دI#&$eQ8 H[\6I$􈘊r ]5֝}MАXsHwY0e.um%8Kt~lv%8Bg>,ej|q.H $5V=@S\V @ bSh0Y{J&˾;؞?ԗQ`NֳԌXucMK#u.P"bE1?^P K憄#XQ٢-Oy'f^۾#|6̞+瞲dg d`Y5Nc16rh7/s:-0BG]t`K4EIG 6$JHB0R);Dر.YtaJن7SEҚ|$^rR28:-R%zʝ L_:SH0w4#*oQJF+0$L| J Q\)c@%ے0 1a!?66IwIU, Y+" .@ @_# Y2)I~sQvJ+dB&ࢲN+$۔T1d"T޺@>kZB=9PPI!P>1 ku::xv1ieS`0LF)}/3ӿbV9d% NȆ'U0mK5B͘KemؚI1`<VoP+-g4I/$(51W`59HnvJcwX`mUEp,hy~:z1DD~ ^k%Jb;&J%$cK: `k!cdTl{lئP7 ٱֵGt/bMa]iGue/? k $j: a 6y`}Aj~1jwzm~ڼ@!X%k Jg7?/FiBSP9:ڧ1^eTiSn@rn먫cɡ% RZ/ ivc<}`Je:Ҷc !ۜv,tɯ+(h ~dbs ȓeYWJk4;_ԃ}6',). *;&ytR: dviV~\;# 3C?dQK!k"J#ȱd) xhJ烰0jBgj$g!&j"Jd)JȐv,IQw(gk# #8]1Xf>aFFꗑv٢Ц`eQOgKUł6l BƝE9#}}BM iVU }+.NCy5-'Z۸[{ Iye!O'yӕ|OX7zIk3&qg^4t;קWN<J[I<JW]ԋ꼁lTPRH-88rVkAX,C}9X,Y~8o.GEFuBP $%ᐶwt7ywѻJA4_Kin'!S֙~LNC+D'eZ)KKP =)nrfiԛsǬ>cSNn[?Vw~no>__^xsm錶n:!|lzt[sd2?rl3aL!Z-}jFu7c]g3׉WfLV> YZd>U][ݭjX`)iuQӚGjÆ'g|A2bPR4e$MOOoO~~}'{ok8XR?e2i# ۑivbUtw +qƦʴjG<x?$]kbQ֮O˖cV櫦旇7u!M>G}9svͦ!"ۺȑn|}WOufh;{vHr I{cA$HJPLJt9`Q!EA)^qF驝 k2qfFQ@AؙAv^{=ʌҎMB)<{: *J7Nw>_ nUcD2GJ+7KsÎ=(mep4@pa*A!+#fL yot@ ޾ # S/2%ڜJL)E9s2&P6thzUFDu$T}RQETEƠIDBr(=ў;3Pu2XeT9d`F2$eR,`g<*ىLһcgi3ο-hk ݁~ nsv@ijj L=hrZcjqܩңi㓕F*)$P-]`}TY))l C9|Er6m(Jd?swGV_P ^iz?^޻˄ +'+/>g+yr3#c{CG&'3Z7VU&Md@X94G= %1 !P M47l):%=%uoLvPl`br1]O$W$Q2Wl@O;/n8dIu+Z@4cI<]4PtFUQtɝ[ϗɒm k^ߎirB,R>Ҥ&4,3/X'dj&InekkWkbVn5mWt_aF𨻋0]]z7O źuҍ9V$;>~Gݬ{Qycې5ڊTxv=@ÏPc){z'53l vpG`ЇH ( &IM:yJ81"=)1;oq~柫M¾}9*SFLVfo-\BVl=`T 7Gtd@:s4*;M*ZbM;A"*d=ِ}[w|CjNN>iҀܠj4kMv̸~`x\?w7=}?Yn\5~޷>4ǻCKe{8Wzu`\'XpOkTF9wzRMZVvWTkJY@v%;Rp$Xzrd:p 6"2ucI5CHpqzf!(BBǍЉ' Ad;1 NE+^rTP ;ϵS +E9k-o-BΊk=kygz@iti_M] Xlzgu ;b|cS{:a JE1/$mE.<ρ`'R- . 
ӣ++G y`2.zH`3OP8eH"k " d036 u݅b!pa ^1%s4<,l&_#sƻuN)lBtMw /|/Il7zUX!](E)E][v?tcّno'd>"# Y&U`'-S,YL-2et;'uM uFŭ]a iS=wA0P[H:1AwG]+~->%@h9j607##qMiZ%!aulz,t1Ag|y_OSԧnjJ!O~(9rtr dMzNfdx]:Ic*hC!lpfLFJePND4fK€C$!GgQ?}P*7I}ʒ6w%+auJt|x!ꐴ(g4ߺ(/oDKy,q|heR< :$m8ڹX:.F/ђS&XC5YTIbIY>OT*m;taMu~`OSk%eJg2+$JhIS s.&8LyrΪd9W:3'3!qA#ʁ uE|x4U5vVq!6EcP-[_Ʋc})JTq參 u& Z ȶžSbKo3!b+LHYz@UBFMZ|ILH L(ɽLTNnӮl!#7&I\8|dy2:#:чB,Ec3rInpy^TB1k/]p%gf^urWpVc?>jlj첔 53/&{%W;W*-3bHò(Z )?07![7ϻ|EW/~O޹YG?2>''I'qzBoGOKTJS2Ur^Y*ORBRf`x*qASJ>(Dk@ݔPW?曼ܖ<vQ,νZXAۯj>A{姦DbŃZ\Kl<&]5w-M_ J˸:KROǣ3qժ+6~Pbw \jAo;\*p Jp\U!W]BWDbWGɂP]\7𝯐dVLH8`p#HI~j#k.jcB-$^|{u[dBM ݫF?}4/8=b'ˇny/퍅 vaEeV90N+lҿ0{^6N҂ZE+_t吆o_լcb8pt.q>4G€ߣQ,\,ˮl ^aN^YktV\lcQl*M'!R*TO8V_ҙ{}]ؖk]M8vh27b ^vSf3ZQAvžT(xAӉ)QAx\JDDmaOhSʕdIde EB˹=:Ub9o躲|g_س`Ud|O:_(_QZ:5)hGUښ&usUch>|tЋ&yy@P\NOiZmIͽG AdIޙrT9#`t%3]v$ l2g3+u%[ 8i"H[zdfc$] OXrXve'FNܠBi&2 +L2P!, 2BDȠœ3r֔F lTF˟CN 03т8TՑb!(O <@/]-P_KG]L0Id3\&ǒeI)N 1HQVgVvRo'-w{γ 76Y$1|t!aw`QQg=pt[k3بUO{/{ |1i)#S8"hxQ ʒn*2;EJ%)ezi#BƉ~'"|ȶ ⑄~d8M\`N &t-ZG$ *p/lţ;X*cuott!IӬKX̜I9X;-usK޲Ԥu`"Q-_w2``RjW&z%&,Qqـ&g5rj30F 6((zPtJp: &eD sKhMX&js,bZiWܯ^Eӭqn^ +˕݌9JXrKiӨ]4SF:/#NaIqD<+ 뢽G(/:}PلTkxM7t4h *AZBt&@D`&#$"9@ ШĽ=T*m0cKoxt| a]#\Z.2F-Mc "zѳags:$^ Zgltd"F"6R ipu"Q$|Lԭ mMLGV1gWY9IdcwfNOOKnb}~Bʵ~\]X!m.|*|ɩ 雃iZ8;g|JWb_9~}u䆑T~r BK{ Pi2Ilpt7yә4FLJ*qC@q0&NG8=7|7c[?"Hc1K*/u`ڤ%,ɍopJiՇ?r16It^D;j[;^Bӓՠ\0&Ǔ]4bKS{R6riیΉ#:>B*k?TW]o/5qQw;|!_N?LSCaoպ7V ,Y﯇$;]<pFʠE hOɻb\M7c̍4bj0ԺU~.b޿]HYNb!F|(/ &KY4/^fU0l__i]&d#'H`B.ɂuٻ6$W>^v~1`%Avm1H8[=áHCRdRitU?.P{$PԱZ בlaБ>.=6iB^9XAז;[C2/"רR Sħӫb{ {=Ƴه:|>4UIT25Ʌl  G KEpI8$SRy4_ a<=5Q"D`Du(wCF4RpNxa@$(jc=Yl<{^;Ys7mA[++vc`|.s"sꌃS>V\>VZj"S)V_c .Yk\{ў!-һxA%l|H%k'ʡD|dWy%02jNYp*^#_}?սx{{NM3[GYIlۊfWo`2/DHTeHIgBЌH)ra}K*܍41@GHCZ-(b {B2 TCd_؈YCԣx25>Z7(z>?_?9!:b6/1Y8HFN]uMQ.uGU8 jSAS{pgRܜ 11^ռ|T52# x8TzW% 9ZEPIb+/u4J:%u:*a fGƋ>΍;Uotw}_q8HeȑJ,<"q0((Lk :!Sa7"H)bHWa8.Z:<>r*h2NF x&@K_FEe~%7IGjH~aF~x7^lƮ/# bgw΃3~m~F2jK`V˶8LP.CݤZ?kţ~GJ\"hu$TiQ@ck(# LaɌQ7,ٜZy+sZ&Pꭰ61Ah*%<$b)Is+"A n+F[52Ty9 ܄,;e1qj\V0ueف#vO"Z-o&w3rl q k> k$BDNK: meb'zխ` 22g:[\>:E y4ɓ&QpiDPσM4 rP[ Ҟ yH ]  g_+dCM&T xAȈĦĴ8x6\_{#uZtp`hQOE5O " B<3rix_ 8g7d^l`QШ(PcћYz P1uͨ7dG-+g=$&hխawdЃBY?}PÄ<*.K*u%m%-ʠǺoF uvfocX}͕J J _x*9drx}Ǐr\^-n.^O_1#tִ嘍~Z[Ԗ$RQ|}Cvt0jhN<̞.\"KWM}~ϗww.n19fEk:FuEmrj>\PW]S~뺭\{w}x3S餉 쮜`I-ofY ENX#!+#4㨞‹@uP5M @IQO_G{D񏏣vm_tz_H),x#ES?$-Tfiʏca.LR\;(Zfqdg1{vwKi]^<3Ia2xX{U/ad8c4͛mbYrwܨU7x)kbYS3%XrړA4ƐҤ`x#x"TF ZWuSァA`$5X_=_^P!YW x6Hz1P& F[J[r xj9_ wCdw`H^*7j$^ѢVU .콬mU[gE)gO?' 
Yϓ%-_Rϒ>”[ oT@qt# H<>Z-Il`;ŨdboqR!"eՉ%,wVz=Saj1r B!p*ޠ 1:J+iE"8e F㴴Rܮ;tdW~ O_ok[xތm ub> g^/Tgv\ۛw3EA[*-TPYyZMZ Tu ӻ :ObstOftߌG/fA{j>G0 >Cio~٣j]]"'\E6brH֪}x4#7ΐI!ɬ%cRQJwANݮ%"D,`fPvqEߣ9-r:x4$œQ\}n.nK7g5AsVw@o\?`=:MQ:xDցS1Nz@NҚ ef {6ߐl]fmS]5 fTչrKdq3ܽ{}{ML*9uk؞>~;tƮޙmCRʛ̥`ۧ7ҡvaK3Yڄ"uf]J-.Yp:핷WS&U`;-~x\Gؙ\GR's.miFqA2H$FIm[Sʝ JcΎ:~Z;+r#?c|,̛94딦\-@  >$(ESj-Cybvuu;GX-Q]_QUk䠻>j,]·y166+ T`a: -ƻLWoxy'v&\1~z:x prv"ۿevO"R I2"Km4*EMʕQ뻌| x$|VT_+c/^5*?,6n8d爒GWKGCC7uOȫeCk+8 t\F5~]5Z^nޠ}QQ=轝Mt2}_F͡;=7޴F_p76nݖT~P.Zo_W/ W^Զ,zf]f/ݒmkCW%bU**ئvO}6?IYOB$J}g=az="ށ&*!Ƌ!cXi&#rEQE1E*DA-US#eNpol@J3sD&FA[bh[I+rAl̔C m/~|i5!& ym !V9$ l*fH:(əNZڰXA73Q<*=!6x8rJy(M\:` LbzuG♦HQ[E`hQi$C%L[@ pj-s#ELd*Kz_P'3V!-ABמ^tMݣR;= tڣ<3ɏ %a})0v&p)N$pWm{%MJ8* U/Iapn!,p,n7m>m_Ǿ؎dg2YbŲue~HVUcǝOs㾿q5'/o/&wW.?Ulg\{k/*q &i&YKW[3Q Ѣ皴ߪWu=~dOikOOƖh"]VvZl2yx5 r˂t-/槫WhSվ,v݊ʼnַUvXA)tsR" m1o[Ԭ'6E*iI2LJu0M|o?~#y(vc2LN4^kgUGe%Ƴ4GNjm=?DXBcFqIψk4^³s:T$u#q{%N}O% \󣣮O7ڱsx^=/K% Lsv)}!A#(` 8L: YJWt4׋CGF6~mHF?g:R8Oe+UTȸ6{8yys[m޸)K5rz}dcD#wWyiG f:{+ 4R kt*Ja[Z浇|H>]aR"a%$9.RdKI5Ƅ kL}MyTtڥ/vd I 4j?=҃mOos$Շsa?0Rm%~J(@i{Pu0,ihHF+o2+01@+aˆ^XcPRGAxU <+IJ{0,g̀1;jhҳGǢ}!)9:^[ncWHLb}_b1GR1AК M=B>PS268f#Y4M&s#uv~Y(s{fT51Dԓ uCt))*DR}3copgt޸7ؓ 7->p#ȅyx9Ň;|/4n:8~ 22WRB 45prbRkL`ޓ fTI*5È,i"0}2FtP+5ب3vLqV@\QG_pgl}7Lv_ұ/kY=j_A?)&^jǁT5tR,5RbaV- CDi3*7ȋ;4C.Ɋ0d LP8 7# dT'\|5W'' 1{}zfDqd/'{.<ÔbwId %H M0צ3F 8ft\xGL*)NLdIi.wp"T{p|Iɾ({E1ȋ_L8lsp@;F䑬$~ ZZ,DH̓=W[1!xq0}{C܏@aJeO6}hgU<8qFF;7F?)QH.䡢Ws]p}ռFK[F[FǼpmo*oKŕhUVz}Năv…S+t޾bâ]}bBچZnX}']^mcN_~WjVN$nv5_9`_,(\ѤPu3מo&$Ow5Ԓxy6Y_{OB WDWX \jЛ骠D9 ҕFeU *h:]Ҏtute -*+, \UA;|*(*TMb5tEp5F*h9 JSoϘFU'q\8Zy$Rv+;վS5mP]`X5tUr ZdCRN@[к"""jZ誠9t*(Ũ]$]d~9mm~<:\rF^;|gƘ-zx9_ƑAC 9s&&Odv}hi449k@g^y"]iF"VCWЕ`|ES+S `]1Xj誠+B)]"]Y5l닮 \Y ] J;jW_]1!:VK=6]uC{ûQ.芏tsOĊ>vjpbztUPnxF:39u, \kVUAHWHWFm1 xI;L=tD#/o+U4ֱ$î|ѱ_Η:g|<TcNL*,mFqq]CÎ#$h*K=rˎh'/(G?IZrR1V㩠=Re('?EBiXEtU \[vEh7CRN2ej "m5tU|r7x9]v|wtQr+Xj}U JiF:A2Ul]\êѮ McIU),nk$VjjNJ J#]}5t;N=1%4?~ C7xlW{7ȩ@GJ v FwRr z"UA ltUP  *+,@TCWZ誠E:]jԮṊ7zRlsps r끿RmTcL=)Ixd1Z֤7XcknEz=˻Pzi v9sכM$$dtZzLL؏덻 ˆ^ Eyuo$sOͨk߬jm"YǂVR[mζ6ŸkNt/o/7^4w"ܭ/0jۧ MeP+ag[hn"J"T/ x܍ v`1fO(n|_b]un'Lo$j_KxbW]_!GKewQQuADrpcXa ~RC16Oϑ)=&q:ɑW=-.?s}o&߾7o5 νM&8!MN\+Oְ`>q/ 2& |?}]Bb\J#rN, w[/^-5 Ysy+)Ȏ,Ϫ]&{Mf: +1&yDkPtu$r`.2 vGQ@0r5ʅ}NsͶ:==2 b:eI Qk-i:py=Gb>;I[Jk8#UhuJ.%P0xmJfIh2,2qnD39rYJ{P[a]9\pwE*6Y1DB錌se s4)LZZL\X$ qw1 KA i14fII>@ѓF' 2dl9tԪH4'aVBQ_xSe5 >rM}WD% IDߥ@M4*.lt6k'>+ I1[mI!~gFeVI޵qdٿBvڪC1dfwHe0APϘFDo璔Hi9clbTխsSCmV,HUFK{q҃cFgdɺ5}P9;oB4nֽ<'RIIbE1HGo}o)Di;z7Eע$S&mI!a##+$V 3_' H/"(-N V\jH) AJTm4Jt)*k_c37% >{(.Q0iJ!B QȎўD }m.5ˎ0H%GxHk\4`)[ݫ N6h N@kah:^vNq,=~0P&ܪCZ]ė ڢY cMm̭.tt%DIIـ`64ka7Wg\/ *a p7 VX4m:dZ{΂PѪ J(ڑkj $*Z@ԓ*e , RO [ljU!RcfQ@2!Q6p.7SYBL dJS%Ce:|=Ʋd2 V^X szeY r3Jր1u4 Da(!$( ""*f"HMwآDnUFݚ |p.#hҸA=aP]/fĥ"n"棊1͋BIJBm*m:/ euٶnkjIkqm-x v`c{{]`!00 ƛAy@xd.i:]t4CҕjIJ2Ba1%OpHv9t,,J茸pΠ 2IIX@iT_tВ&ZJmm`=A;<6k.~ _հ{LY/:LA[.n[`3 .-zLn~ \;-,F ݚg)DzI(ΓDCkBm1&P.V = #2=T2=n(!/[tCۚb樇 tyB1C[uŹEYf1uA;(XfhTYڂDezT [bcW,XB|D4cҠNn B`~?M bX0j`R'JQYTPcD&7cQGQ1abRuJ` t@ QYj0Q3j hNmS; fnf\HkҬUg(j̤yfj)څ'еuZ"O{*Un&zf ScUwUdmQi`NZS0h¦f@ 2)z.$vO34%0h*夵BSp*qKZkCWQ[1xx@A4DmM\4*w]"AC, c4&鴝_BE!:S_0)ujI-, mT'3tE#BPޙ#B3 ՠ?,П_q-n^aqc :@2 9Pxd>.9{􇯿>|5:t:Fu9Ĥk+t+]VTKzKۛvuksz5>M]fӮ*Çwk_GCu]N0Nޯ|_3o~vgX_~b]ՐzZV+Z8_/ֵD? 
0z7/BqO?^bzŕ^mWs "69ka}})ņ]mN/]/SG+K;7 >\fov:h㮭n语4Ϥ{EJ %/+Z:+ WAp}:5 lz+φ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpq WFK7|X51\OpE9@9=rdpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wl+g7s2\ p h~+g!p WH'\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņk (Ĝ W| WepEw}ܓ7\9\lz>+C4pņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbդw u0xKZK޹ک[x V7{{âP=kcGFg#9a2?Y~1H;1"2̅OLWϐdfDW8Sg㾧R>u";}t5.̈H]\iBWcx<]J!]F8BWck#]E3+ZRalz3"oJVt/CW7~!Wӫmtap'~2AhDta(}Zt?4C^"8#;̆sW?u"Q2]=CRcfDWJkCW+?OvLWϑ8O~{)ShYC(bb* } *]5( .gW~q7Wg B .{l {O?Hrk؅b.k^{WkZR} o. ~I h]q]f,ʫV6K[lf+f^_cڥNS c<1'Um &fj AL߾4'l6c]} ; 0 @n͛B9~:!%*}rtzy=ןטԟ٧^ʓ'&\>]v+\ hOLWϐN+B΅ROϑ [gCWfO$'0]=Vj9'"v>k'ss+B B5ճnj #cyܼjuq7xs)u>NjW9opr8[=~g[X-ߊVt 1 A2NC( jX /玻ΟGĹ 9w 67/߾˟wxx8znCoOg lW t\0?m;B8g6kv4m[/c0Eċ_Bȷa_a{y}8`st}P𻠿ﰹa_4y밧ez&z/󤼺~Mr:xc[rx7`zC1b"zzܾ=p /oV<[_;{]wzw!ݏg.|/qZ_#Rgcz,(Sa1eA^Yml9K\OklI֍:v&L!|.VR3>-ԟƤJfS.έ:9i'CAA~eXvwÞAvۨnthCܾM~#}^N݇BwC~Yv 5FeS>_G5Ub#_rcԎ\zwRqw/DIB.u'Bmc/s6ISڋdE'VDPr5_xN6se<&<εA悐ٻ6r$Wွݶb`>2; ` YE[YHpKZV$؉"SbU%D/JQ)ЖTZ"OG*gmKlQZ#gG9@-J?," 3Ke2)eRh.Z2;hSn++h)+,E"FEv1R)3&kӿa DZZi:NZ q9Le 7 sYWӿyavO Ғɔw'``I<mc[ 0eD +Q)D)V7/TI&1U>O˜+UZdVk:$Y8{ggZ(%OiOo[w\ ݳ2Y]!Л;AhJIT꛰MTLRG}R3)W wd 'š9d2%xu_͟-~;CNWbyM:L.nI.Ѻ<7y Ng}ax׻[ԯſZtzu}Q% t09I;!?9цQ.6nh0jvξi޺7k^\kEɏ77oWbfIUAr߯s7 )J|xo~kivHJoi<ì2;ZOe.> =՟nܬQ>bM62#tf )kbuЛ FW)?2˺=T-_gǽԿ$v~7/?9=q@+0DIӼVF?-ua=3)~f'iD-djWt߫32Ue',r4,B3Z&O.f]w_ GCOU[CxS mۜ|w8)7Jk:]ABx_ &K=Ҍꃗ:gyfNx˭:x!7n#I"ZK:X伤7pdYH8NZxyllX(#mPG*|sL*YG~F8XBJK&N!3Vt*5J^9CtLlU].ZɋjνwxyKFɞL B 2mB' '(!)=xd,12E40oD2 :4O9e-y-̫Ѡ0t xe|dM>'}I&v`JID8G͝' '|DH= x֨3& 9HU:"" u.BҠE1*4&:^ if2\L吊O*^9G4p6@YD܂sR\uab+mFrUbU+.BifoLp5c 8>|nV֎.}>;VP{9.Elӊ̃fLу)$%4Y:E˔&S&dKLaUSWDk`rWᐗqGϒFTFz0Aq x%PH`( 9A%";_DڑEסú<Ὑ%̘>ay?9Jfm y ;LRrjcRs% t\,2!sdqi7e-ϵ_.w_Z>b袁z/q؛Rg~=:ֿM\>CHӫ_ÆB5#2=Œr?]MzK-Qkb{w Xf?OJA2jKےW]4?^O%i@/;~oly^~zӧ\[J,|"bC-JڡP&fl^\rJIpB%tF8F(||E:;(2"x3K@A*;aTC8(=V*K̀`%]kA-wS"zZ2m|U'_^k|vX4S %U,yHB."ωFVս\()vh<|4N|m';+] r$P/T{Y'R$'s%iV4O! _gaV% ! ֤º-dvWp(`)ڶiaYg^jC7\x2Jp]VTY#OC4/)^K.?cAt;5|5O^Œ_D,TN4*+a,{`%+b>j漊| J6p8q1=b镋¸ְ=AՈጥxe?^QsW+dIh|t.>aJ}ziz[~TST׃0*xcJo>> 9)^]Ӄ1=W7dy]xqFDoڹ^<r{Y~,$ub!9JD~m?&dzC}1~|^tI"]͢C?|>+qq)~?VWgDMY]-U |iXNo}Rp?ZWWH͓oJ`-`]VG4^z^z=룎,="ay;[bnB74oS  OS|<0 snUWmhnUwN͆'\os.zܿf;v$a_$iXy{DgvyLfl&ud--Luv+t! 
A#2_VJt3kK=)%"N̸*E Ima.$ǥ$˦l⠞Y$"qZ:jHP<]GG R2gN'gMd!=i9$" UAeN.W80J{Qg<ʒ".U0]qag,Fkόb싕o&vdMQ"q:u,K'r[mi4uHYJI+RIJB-yfU8 )<?:R )„ԥ&$BTͥ-wQqg^Lȣ6q鿂⧤絇U\KR\g9*ռJ(H)Rzv\H@\$nK?tOwq[7fzdĀEP GƚBEg}NCQIW\r,E-reZMۤPOƼ< +Y_|K>JWyvSpC7[.˄l+kK_̦/fӰXJm://fvӋ7SkKf7FFHCc YY6ÂmSAO>y/)o&o ?Wytѥm(hNfDb>zZw=/-{EDC* OxλJ1eGb-+!jͫ?|'uϴdd!E+̻srYؕluI=8_{ywOζuQ5N'4mw߳}p69dsi\{~٧Y]lB׮[2u6uf%d+ʿKf']FѺEf{ ;$uߐL9Lb풪6S[v7uOF+w<-nx̛6 O5?|˃;g ;Nc]iu25_ܺ=Jct'& ߚ8bCvx3xf<2ve**2cB$VPo,E$40vb}qbﶌ{؎{R'smR_YK%ZA9%o] OhHys$ K-p)sOGU}Җ޹[7خd;q3iN45o5sD:۔>2XjUP| 'zt޻gG1f?b$+1eQ;ITV9J$JmB$ !x#7o7o} 6ho}viSOSI^&ǐ/%RJj7FMyY8b%&?+k*jXMm P o1} !IOJb8YLjDo PYQ]QӛixSd^؇NɡWuR/3宫u &R&vh}t6Y (gYԥ0M1{P[AO %Ul=.EY:-.HFjlGz\V=pB}痱Dovo<54 Z0w~wCNO'OO>e˼d_qZNw`A`XL: Z' QU[Ȓ{.r&~MMk dLqV@X VFjlGl?soPPa@޸`'4%'sHL@H H5.'INt56.xTP AɐyL,h8Mđ4>p>fY :WU5qak/vJ 0 "V}WFD>  7xfn-e2JeJ9jČ$WuǙh1zII6k";s}@t@`dIsi3 A.RKmE>:͒(+pqśf98 he7Jbd%A$Qq%`!<HAoŀ}j/x3yV15 G~ *kRR/k$ ;*!dyv{.~kK&B;0d(meyץ3_wkxc do i,#>atߗGG'u/2u }4Oذ )ۻJ4K\߾v+*OJtI/Z4$Yyȧ@@Y^%G=?z6'AKY郲5zZI ҵ4t~:o|o`wg Oז xDP "v>j]1GÏW~h}E,jquԹ;lQr<}F]ԕu;Krb+U՞h6vu9@<] ^O.4\%YJq$aʹ)cMlw0%ıS%ޟtLYw1~sjOW/VTiH?Q>TB*vZ82dV21Pȕ1RYPIz!T1qe r_rr,r&-K4ݛl%w,EB=?50l*|׿ѷU<"G拓2FEx4BZ_%(VP!X&)i"8  (1=f,1"LkPW Ҏp34쓲12 oK‘Rh wJdI!&c<Z ʲ e=!amay@d5)瓱hd!Ίf22bT(Z#'%mgU_q7d7mɕ*tVk :[kt6 47\ڒp7g}GG@iWw-%7cp՞ {ΛEk!o.榔ufx̮BzGMDl=h!D$ > 0I e61pdeЪ.9Q3"؈Vy0jG(MDch;acKO;)l&xFǷiܞ(Vʏ"[^]7 }76?ߍ'GB@vw _(HtZ9}Iku㶁f]Юl|L&ӒdZ|!1Ayq͇|pAɉ  -Y5T޽Z{gCE6~yY,\*Mq ȼQ r]Qx]e"KMb Q UR L7׹1KprVg1\cggp!eY@s2cHI(N(c$CL9 VQ9{t<P[o;I]3o᷃}H g#xJiA6ӳ'*7'HbcM@N%˥Su8Ʊe3;\/r nmm|e ~/8Rxz`~ K@nTrwJp+v9OLdLYN_Ҵc"dሢ;_jNŽ|ز-QJa1##o9σ!qy9ȒRP.RdY)o6KYfe8 "b{Cц\TM(O̕%y, $nzcaë́m֔ͷ~u|6e.i۾$|$[ٰ|ZwX?v %&)u')`Jh;ВC CZHF \q \iw")3\=AZ+}HkPH\UU"\!rvɮV,MѺe>2~sVml vv[/ONۆvfn_KF&H hh`>&`۷/Z`W*կ[ 5֭RJ\{fN{?I5wPU+Փ^'w~۷/ouQ JyH)ڃ $ڵH) oWE`HU~pUlHV<Փ+eTp@pUrZpUx(pUjHWO1pP[EZpՊk8l?E2ڠNޔam`83 F.uGh,J%^7fIh=_F叫~E&x?mݧhh%R:c*#bV TV]NB>(lu"I%ܧ ,bh\l?~L&gNwS*Xr#Z4ooώ~s oݹݓ߫V vz?R_i!GY(–YgOIO,{ȑ1$^E9e3񹒬{B$Y ='tϣ$ye1KLmчt|Lw{%$w=)wX|%p0/- FK])yۑFihƴM٬Sb= BKbI%94`Xnkr=0MbQ!jΚ NQH,3#COW!1Wt:ag.â3/P_n/泓 T]ng8Oԟ|/z,2l AvN`AbSog9~O:|w&w}K2%KWjc~A1Wl_qX%Xk)=^3wZO.+dt-E/ĿG^!+B׌lӳL}nUHcʆ5M {/~wrL|:u7PNN;Ʊ^=|ZOk:;tx9ooygծψqJڱkʿqHWvrh)nڝzJz{׵mr`7߿w9e Ƨihrˎ[vٺ{ۆ*CZntK۩y9y7+=Roկ?L[orj_tWv l;{ɷPcejqVi wxi۰ڂqI_?| WHn.;~{4E+bdl:Xc׈r^~p1ְM4:I'סe&(4$EȡBEf˖4'LB)uGԽu>QE l;mZUO¢1 &zk=KGT7NƝ8=d}&sMLu'/ԭbZVlW%ټ{odJG)gCkW=AL])׮21ռ=6)νdu!Ԥ:h#b D2;2E2=ؠ\R* 8?H56cF7<~ ?NNdSb"svꘓce ѱ32dl9Y XǾQle"nҲ 2YlOD&ar!+0JY- HK{wRݰ>} ?}lPERL^g-=i4ʄz`88;S'ucu6Kz`]lvqی`;+3F SHdFX!" KA5xx,pٱ=4=g`y/3a[\~|ʱl6:+r 15*8E;C=HLF U>s|@=Uita˼ṏӕ9à [*cSQj.m9Ʃ0vR><V~C$b6'w?|?ZQFK %QZZlWeVT"{W])n9]9h)d Oq[LJpDi u5\R('I#:ԀV`H& BQ4.0Ҡ0%P P8*;[ r!(06 M$aX`Pin|17ϧSl1 چ-FLȫFM"VlV,2,[x/!HӔ#ҔJ1&QI#5fOtP>u#vjNl|Du[6LQ[0#ji5XVF_ # ="e+egd bs]'gGqꆀO[aN?r~2b{}SӧD. =:+0t|:kpbӨV:oš{GP)YW&.C%"tɤб_K@o(E}9}v0bڙ3*u+ղfhdXRSf..<*`wPKL&1r6Ckҍ9V$;gu-#AzqXrVv7DPYsbӏZw+]-Ey%Rvkr 7lU)h5nT_fٻ6r$UK[$O!d6&Fo1ݭc˖v܎$M5⯊Ū{LHBӺY#oX` Z~Nx]0Vkev Bm\į%qWIr;f cq8>kuͭ߾W?ŗ녥 -EEhPx3ql00 q>@Qu,/l4噗܂(\G%5@7 66vP b|B釳)BVEn9˛2xSJ@\irˑ. 0hQ%@qyNwWy*gXw1ou>PaxPNSAJ<$BZӈCc4)c\r۱bwRf>j4.9 av8?ek>C[rh=޾z8ϴִ֕r1Ɵ̔R?~~We7 ;_II>\Tvn6u8 g*B|j4C0J(FOtObz t?Mܛ%;\!#$:N(Oד^.p:;:yʣ<9s2f/3[)E柚UZV/ ɵ˥FM MfN|#NJƸ:*.i]ͳk ?V.;o/g b@2 ~pp[0˹]qYV⣓Cd'.sj^nnZ*9h\ql2b>eMS|m|z]7VEP /zq.#cxsq*}ZFPanxDx_?l0o~p/_z:|oxscᏇ޼/8㢰n\b)1 ]X e(L}Ngr1ߓ\Tgu}Ө,0Jٳ>9yĺYm'ͧ}̕v]V߼k|Un&~myM?}/ YOCB4;ӅY u}|='!Nz׆9f N6 0.bxn'aK:ZIg6\ELAJ;"P^yO u@ii7I@Շ@TG#0'w)n ! 
[#v.elbǕ|Wn!6+ %pSUo+3秧׹yX^0h\oW24onQ<$xU#1%X2!>%݆Tr;~܎$RA\8V0!m!qPֆNpc5Ygxh7A).rQڅhe3Hf9VC'DL05r6OdCp>q徹^Y5mu ]zީUºFuڇ* KmnjkCk0)ZcK@+1$O>9K8;x|mZ/=Yz&m<-y],ɲԈH汋{܉\N*v/v~؍u\>2H o&G=!z{%8I { GL-r6ιl7.nѸP0a:V_*k.p'υ)O'yr20E܇>KPauDm^O3qBc5;qo_쁓\DJp%FƆT68Wy2{m-$rE\ KHy*'t™2\H3[&fV~׭'l;o5 Q^00n}VI~B\޲Xr\Ş%5R ,ghɁ1RWH0ɨLOE]!tOd;sTW0MRW`tHꩨL-vw>LzNϩ*;#Q{),jRx2!H=dQZ^S=>&9M NRsr-e\gh{Wgocq8ӏ?g?Z)uNH\S*8ə.+f̝#?jGXC݄a~weczW\nim=X(J6E'u7Z* k{{)Y *΍^:"c cƚ]YQ]݊gQRǝ Z0)r5(fi׏!A)N&V^x\Ҳei ȮPM7E$a1Dj 8VMZ+PHp(Q5&!@%!eZf 4 m$UYEeMcٻ6r$8`&Y|sv,m`Qu%G,baɑlEm9i~uEba)#S1%e;'% 9u ӆLp1xz(餫1ȡJ<_ %m.$98VWr}{v9\e}9>ƛ 2s_Uf'HOݮ]l]g_O~j*{ G&iw!㞬 Zեbp.if(#ɩ l؊>`6'~+rZW*RRCOg8M鹲T~!a zO Z>{s6piq7U6@kփݱ^Akض'*Q7_V0˺rKZmy6,[i=ƺgE[| V?(ֱ;/TFtQLdJ$m!6~U3u`fcDM͙0::>J^!&^ܶOn Dz8cj El0VH1$ #$ū딩:AǔOŏ"p%c@(#-"%QTvvW#ng7Aݣzfn4cR;^.u2珝x"/k8O3I(ēFDP,dt1YfxB o_=ܱ[)J.;}1k(vv'rǀugmkkr{9d윯clK8L>-&wӫBG]n]|mk}ҧ3ڲv bTiaI4.8n`j4t#FzhXAG=juW+MywKiKkIBʖwsX0AD"yJpT-CkP>XV*lJiK̤U,6foaaks:l?.fByqiQ}sfB}V}<_iJk6/y"裙UprNH94AjPU:*,ih׈FF24 3X/@ƹ*?ˆ=PhS%2*x]6ր+Ʉp@,gB(g]]@͚>GcAemҔ[wx1J?Bbb뙿sA'ٕr n~5>P4q؟W;4gkPw ]:uPw69̭4ه.P^.GDBpP^G*K#8cJ$[+LI r"d,'dc݌ #j `CLArดEf)99}i@XNRSܪg\}c~OꔜQegVǤc0$~C9rFO\TY9SUPQ2s:pM]$4YWB/"( ZԂJfceAF݂ƭ`)6աdP'{ePY?}{OWmhȖIZoLē}j+xSGkosEgDIĭ<: 0T{B$f:k,*bHBQ.rԌ֦LЉTF2) ds`\\III֌ȹ[3*ta5SZh5t.Q.)*}^df{N~I&@ѧA250ɾ @CIFHMx :Z묍,2.9Qu:,dIP=|)6Qˠe2%ɶce2Q:#hlZܭhVL̃ڵc_+kmi!ã)~6&AA5Y 2D"IF|2C?+m| V'd%Y, 5GPC*HF5+jܭ[~qr(Cшc_(*kDiN#^lV.p{Gu` s,z=J%!Iu'ԵQd]ֆC6s3!h%jԜ(1%-o"ډ3'\YKՋ^N/vz.G DZٍRBDV&ߤBRBFH XLY MpŇЋ=wUe}Ӈ–i6kr5yF |8qFF;E?>Qssq?N|4lP(+qi,o֕Qhy)]cG |U.b.J7cs1qi^(ُӥRHZKԖVJ8pLہ$ls-- Ȭ/N-hz, ; 0eyi3[0z1b(:Ϻr >NΆR{n2zdBRRv"`,RI9`"2\7s/Ǜ53g砇%sL1aU.ZRY R3pJj%K4hESMַ(~h-QeQ9'Dekxf˹ `R*g!x,'!m߶67`׎T9+1"`2jDIk.VJ)l` < tϲ#@h812AhD 8:FfКHa9;Y+W~b2mJCvrJ ΄홑Ɂ 9D9rJORـ[$d`왷"謥6 l2B],3yB({$~\kGpkn%$DK 4S㭮¢%9.͜~ޙ&؛Sk'u1E ͼYP}Ԧn8od1q`3C< dJ [$"1cLdfr|Kr[?guN pZ\]G>ayO3=^.|D'i{oV O/W/5\0&Q)jrϻae։0v|z J߄4o{k}?ݛ=^vEӳWl1^sl6v9?_Nٯ;KضڒT[:֌ތ6[Ye7$ V?Q,h8Z yz[ڪ`V0V'9.d4}: FW1?DRM<^_;鏯~xc|sʅ=}_N߼VL1Ңp_Z[+ϗ|?L?'dJZl!;geZ/+aaG43GuҋWlw`~}X6`335W?޴M4mjz>wk-f@fn )J ) @ qŖp>}СieOS^K(x[x ːÒJ㶳2 CFjhpΌt9+ʵ Q s8###y UYAԙ`x)!jc ayPp;F.5†1n+a 7, FчAwZ+n'eך8{ԫx8p7@' %j8@udTp S0ADmR~thh#एVx=N9c".H{)vj, BX8ziR e:q&eUDj͔ )d4&j^Ā-ўRʃUQ"J6b sT #4NuD/Q$ e?:4Ǧi3̵OwZZ7g2G9W.rH/ (g;t u8MLrܥT!3 \s:FY^Va`Z+I6}rT电 *C_ Ye(Cf~5_,n_ \`\iiy7858%hfwG7.olbkҭkG bԍ\ؖCz͢Vu S6ؓ3DPu74kxH omx!X&W gpݻ^ɄgTbƋBSU3si,y"[ _%'%W >oy MYܜb8~}_?lh-Z ^i`+ Al)vI5w&,\ᄕyhQtM~(2/~1M.b;n jW#.ާF_ptN?QI? JꥯLrOESrpǓI=,;ݕYr3#xvUt}5m6pKuޱw 8Aѫ g ,@gO%/&}g?}vקnfː2-20ϿxM\ H,rA1]s?\R|cPJ_4͖zBw ?4yLi?7?Mx H1vl@UwKfy[nfE X3fG):ZEJMo`n;4vvi=& ūG (in:FCUveU76~[1@rYqv͖j~;`֌I6UPNs lov3 .J!p@3A#,e. JЄ2w);Uu#cD&jiK<ݮ-pg`:SRK'D$T53Z"Rpwlt2TA:n^~iv:`p^(>z%W1Z WRg~(#U>{S@(v2*+c*~,y~YDp= ` $sЪ Vv[^{8J T .Co 7\$PlV浀khV< Ŵ, 2hwi;?`4f7t9*? ]A7WvUԍxkbG4^8`oN! 
k ]#%O2@Ε{:ǎ YO5{1pWFF1>3 Z_*GA3 cAw)1#uoޙy^۹\/<&P}(WY폾 $sYj±D<8%J:Ia쩰2ZMc4*kc&PfTetb0XJB%|/ t<Ǧs';^p]fm+zC38n],r0Oo ܙ|x)>RD&9 4-qDq,t< I:BN<U{)%:H ByاC('JnpKr b`wsʚ%{gqo]Q1#4'(xAc5qZQr|\as~=Y?]q):=4cOc[L 46)sc>_1Nbͽh#5ssguv!OxSU~4_ z,GH jHa8jJߣIfYxURn0f3 s>qG{ZP@>QOMZj nظu~,,%HdoҝBnL'&5CV>*nbGa[llrZenm;;vۺΝW:rxH[Эcy5=qsU8r[6C~n/m*DorSWMnOs\CQf-mdiK*.zBsU|24JR\~=lq!KRBI@oϭЛ<2A%(-vn;ҖAoc63K$TLkJ(V(Q0( %8:` V_zk:pUGXt .IP&o0CcaSr5Y"7ì L rB 0dJxѮ')O(&tBpS*N\ \^$\QFF ׷`ƽnk9P mgty}& &Y!^ ٻ|:V%2 99qhB&G7kI,P lfFގ,5i*.F zqs6&Oƥj\L6߀ar)ax'݌\h:}L&[Ҵ6yrJ׼W{ 8&g CttxP8b9)B SFҧ>b*+۷5A!ͯ{$0n: .7土R5X ~2|d3`e ;jL UXNHI%p=\D 'g y2pĕT*IU \` UX2rN7~')•dSRO`lk dUVUR^ \)N ;%S;LO*{:PIZye0Iٛ_$\i.>%$0S'W d' &iWIJzjJW$0&agfWaRJpuǂ2AO@`:JruJRJ +"ħWI`NR \%i5:\ ^"\i#3D/ JT7<佁G#~`.G|V!Enp k0||Q"d2 &uD9hkcaL>5[=0~~m>aj{ȑ_qhw;`̗RQ$$;Y_b),KnER8zTȊJgo[E/Vfk(bݠ7ՙ L}\.P=:zV@CL EQw|\fAjbQE`J̼Պ% h>!`/ȓ#W 'WUi:OzrH}ApUb]=;J J);zp}IY$bઊ WUSBxpeS]\UUrޟ;\Zt5•uU"*IkǮHJ^!\9 *+Ҋ*ezpwI /(N/HZIIsVW_ \gkwzv_ĵ^Oc RYJ\W=9Vޫ +%?yauRJ{@R++®] F+Uml@k+ʸg&q7ot}ueQHc @ U\}1u;mR цWH/x\3XYMwٍy3@CBZ~}4x13UŒ4.O8 XuEaDS)D楊p Ϊ/O[;2 Zj?r3?~qkj/|?àǼ@[1b_"ox U;a<= Ml{o'ҳO{\^ۻڒ :e}:짔e2.mvǦ-\շ[}6xͽ,6 (jQ-9ilcgwQ;AcYƇQc3NnhC HC>`Eh(a*yDJb\XfmO,NՋ:. aE=B/;^ ͔bdf4i6WB~WC4̿oӝvq&hp>{U~e;$z;nKN+Ƶ0&'h\ˍtw?"r~n>t?o"!.Q~n~eayp%`X>jvjμ[RG8zQ}ۨƳ _tY'44j0 f7Jc79%Ia0 =my_4J̋GVS[Y4qƀN&?[%T":YImh:F//V#:7 AoV?ѐ͢5%zyN{46oJnEz7qPf1AI7JpH͡u9gg' q 5nXZ`6GDEcU6@yUHU¢bZ%'6JE⭲N.Do]t#樄R(-B,BKBHLˬ5qvΆ= gDC窻7 m?}o4RPRΤ%"ZoThX} 46|%L_=FMI%ɺ:g7b0gmiǡ-ZFmѡv`8l@TMeEQ@x"+ ӈƟx(KxX!EE 3$Iy2!HJ`g=1x1k,@TZqb)Ғw\g#7FRq&昜BF.%AYPf2"&nD|4ԑpquNִP\T-pŧ4 hfZ$HĬA% xʀ<$06z1p\58ux#@r&&S|̑; ~4&Ip]㉲+/N{|V.8i&5;N'{\h)q\D}zۻ6$fn7lJq ݍGuĿh/;RampPe6FIX"C!"|sA'QM.VocWo`%4} ]Z͑6҅ŝmҙfַP:O.Q~ GrzQ*-x^hD\&O6x3\:?g{2O΅9 cպ dΪ.y/ved 0ig f.R{a~+sI%r6QXb5!6X2t+' 779 7Tf AֿMܠ })p 㨲|bMu*`0n9tIBf㶅fmhA?l;u*U^PUxWujUZsHJϻu`1\C7~[ h%f2Ǎc dQB`Fpzݝ"p%UVgx??^e8u Λܢ̣. 
*]f5MUEXxQjyᳵUZBE$%BjAkv&GgVaKCGZlڲI} C" ,{@ ,.!JdBVq5\ h;mx`Do,n9[oYckј ǜ&q9![Y_R1rvN5H28Ç 1tT7ͭ5Aw<H3S+g3Hں1=dQD[?@<6wc+t!OqV"rQ z|2q-N6-LGܟN qLvU|Agq}b pKv"o.̶ p#/ x2շfgx}cb>A*%^mϮw H7CȖR ^ x~M68a\k1:I3dhI Rj^&S)&җ89:㇜;8g?I}#`Y62E=)KF_,"~Nw㪜;s.Ӱ?XoGjNǖEƸ?MȰn>_k2JZt)leNq93J_{~LwU&OokRj˸gj{ȑ_i!m@pd2dY쇽 Y-yĻ~VKe%m8V7E٬SŇUx~qI6ӉsMMsz1EgUU@UqیáTݮ׾U:*]lzR/u\=*j^Ʋa~q +!WQ@Fgp$tr\P2oD54nx80gؖQi]gBPxXչ#CL,1efX k2]\}wVA7+/2wb)'{0?R Ѯ.nWk > Nl5 Ƙ8c\ǗYx@[ǐtvqTGz}6u K=Vu\G~Uj<;QJ n)KBx7YM?Bn4vӟGMCnaK<&q6u9Ppx蜌`f*}fJcdK<ܓ.&xmw)."AեO-헕aMx 4O3f_7XUqӮ f ;цףY 2|o_x[k cHZ)4x"T\4n8 2ƽH K&;ӊ}16D-m@ Os0IρP& F[J[r xl g5|<1]OlGIY8~W%qD[Mo`}-ΦZkǝ19cuFBt$ʢ@FH)p4vmD{D8Fk!h \?2h}4b94E'NiN Dmug;:C7e Hۡ[}z()vJ`*?{aTx!hx&P)Ax%B/Ing~T92& ( ,~f;i\HH˵F}g$5hNq"GjLi’`%\46®gyrZwvJ‡{DGT #Q¬=,ʃP;!GDIGqKh)_+Dd_\y4ldUF#ђ(5pO 1 ZA0=NZ oH$osM@4bFCbq+0JI8ZQkmN쵵wݓ}e}.˴SM4xÂqNHC]PD |N"(DP&y/myDoq,3_+ 9#)D)5jz)D<@=Cc 0t6Cd.널o :5Ú",RVE,Pۥ HdɂГMljrAT8>%qpDOJAU,L0ZXnCO@iK@"螀tUQʽNHA$D%K\#K̓&&2ʭCDny|05u1Rpqh 3sD&ƅ̃4% FmugKm::9lx-jt3\t(!/!N xS %g∓(x"ڨ,M2u.\ e8Mr2e{yi1m CCi-y BE p; dF BT\f Bm?ЉU $sPw1ZuM}NJbtѤ# e \ q@ !C%vpANp&x.&ST̖F!8yDuD"K6qsw|0% W'U48V٨zG.s8B*xY\JT%b񩮯BGGK%W!-aRaGp8#L1מ.]̋g~\TInMO VنpԲ3`'jlW#p4e6ڿܳ8|!ijIƖrM-75ÚQ.(qQFxy .=jsp>pZ4.զgI>28}~C:"Cz@ ~RioTۼp~SButo^^|2sW޼O8pQ8hz.y%1K]X=_f_$==W MA^  W8 ϡi`Vf־ò>oFyM[Vޢin.M6jB|v\o#YΡR!iKBU-E\߾>&d#eHRHJ%(H,7) gVT=@X9 LHlX(6Y(^ye .A9Pܡ g|Nڅ& .mtZ9^):k:`UOyNpKN8f}Fco Vk<~Yų9+'y-^y&f b~:9{WT|_^>loߎ^^QAV(KYΪEZwx.ώgkJ ^1/x!/o"jA] & ^4/Gx烼 8[9F9uT_ZyG_f'ᢁŋZ-.l<̕6PzSc6.=Ëu3jX\5vb<8~zgsAð e]ax+괕eJP"$5 (*NMxCCTT{g8c)͜c9A%ԖP'm7U ɂf^@B@ =$A&' 8$#]F[Q[vu g@YMaZDIs$@6 o$Z6 $͙iX,3zN{ݛI)IeAC BL)HZ!H`ы74ϣ{{7 uϺDHp-3>P\.tϼa:6AaF'e\ ;܂1{\Î"D`DuwCF4RpNxN hEeC26sbH@~K߫؀A^0᭻/e:/`C)4_;He֗B#+!Qsʚ5g1'p{hzvt1-qOC6:eb2: pTR9R.h. |P'-$"!GH*@l饷%u4=JJj<Cc~K1)> lX-^t8*87U.t}_~8Q -ːR2D!Y9bIe:%*E`PWQD4B \o.隣|PiS,O&@Eq8ȕ﮻F^ma84lьoθ {3IŸ`z 46=+(5I׾01#!y sQrc[N|lx"V=R:;w%ȉw>_a<DSW+TܺRxj̃+s[+IJL>ߗt($A6唓V'ȑGB*: *fLJ r&7heS!:DrT*H@sؖ]zug56^x O%l X\)!%b=,8\/ƃu'n>le>,M#tKtTkc*Lj3t*-Q:LeT2IA1 x "7:h zB)puܴ+X*7Y-ZG]™hep# 5IQBAqBi;WWk.qW h({_|}ü%BˬŮ-',nYVi_V&D}wP+QYRpA˜ŧL:-i"3ʄd6! :dBN2ᇀ*J&ARΙZGEQ|(2Z3Sr굥܀r̶&1PlXKpփp">*ᔕDP"ugr\/wrΐs?ĕYuY}!Y_h>!0ƶcU^0sn*U]moȑ+bv.p;sfwsGYHg&AUQL%h=b[V|fufݿ ח?Au )}\PskJ=tMWDM=Kn*Rupgk1Z%2d$@dAJjeS7UÓbTdPc(GTH|%e,B邩tPeX SV%OݡvKG6bU{޻*)mX䴣*t+2U Ár|z;k VFs좟cB92_<^ԬӛJ2nr냪.Ύ8QjD/O+itw}}nfbWک;ޯn(kƿsH5$E[]?0ˋZ:Uuja8'=7ݴhV7q\kk>5|we>q}zgMtY{Z_]7g |ư'mTwM{:fKn^ŵ}a4;m >%|ggc7HA+%ϯg?|`/$`V,n a4e)x%i<{q{/A#L^(sm(f#r-!DG"1Jh bYecf:ƣ9;ߢN7Ѹ(=[I(7:S)L(*́`>eI첼1}m2],G~),_Tv|:>yg`}>&J%"_gE<>+>nM Dg/sy->f3>R;]o㺬 O?Q ޣlj:H)7΍mВF NBn79-N%d[G!?L'B mKs΋\8rBc\+gB' AT;&}rH2+9D{CrXɝJNbTJ,BZBWBPBt QaJȦ]`#'Rk=D{( JI5I!D! \S+DB tutԌ'DW%N(vEWWT 4j3+#-7)yWXP ]!\V; wute'&DWR ]!\ ]!Z{BCHCW4=#D).Nf%`{t "OJW-2uj+#jvCDO$DWX_K"BY^fƦ]`Pd _KӜ99ҕVߺzKM]0G Ug 7}(iao$t8NtI&/!2X=.FYe4tpS%7`l2~=q^h߀(3`NH0"BJBWIOW2ҕ4 3!+H*th ;]ʁΓ&) [+++D{-Q,h+m2!Zd  ]zO Q!X*iJtu:AkyEhs<ert2teP'DWa־•l#ZvDiv6= VSzv婏N Ci{I[PSÔ9~z}=Tt҂L&7YN?O03@TzW[+/i. 
grcDC W~>%+[Ũ| /O˙[ϫX8j|ʔJ\QV@k-s [j+Nj ڟ~j%y$[G ,N6, .ݟQոWC㇃W=xT,Q׫tZA:E똏"\8h޺QQWLV±}kvLZ$"]X# Fo^wٛ >P؋84Y+ncJmSkJH TYﲫhfsy7Vi[\f?6Y* -p'}ض&F[xXf7-xMӛEICR'!t1Dkd߽1DY;xc15[vc/>9=zez!g|%-uIjk'%vPaRR43jXwSѵ5ڍɢD=ZԂOդR-jG9<@jVdGi=j~ތߗb]vsM J[[_]_Qݖ1qC:d;D*|$짺\Wt2?-u㯯촣hoQG■R۴;tZCƳy9[黦wVgjsP*+x;ʲji: n!mr.AqH4=MAk~Oqln׳M=gr/WQ~,w9 Gl`IJ;opyM{tʬ2RmWʬ;~sKC%yQ47EGJ 1nUbNq3%AJE^rȨ*Zt3X{^hr,CܐnlQue>-cgU4* [-9I1XP^rcXĐoG&%\#7(5+FL-QM2qB0ٮDLT0 FB|N7P$'@;,IDx'VCO "(3 ,BBWއ{#Z5·kf2%d ]!ZNWo zRV4!d d *wB tutD$DWX:-g+޻B|8{te2J&DWزtPմte&vX #]YFHi Nfd*th%;]J)ۡ+P0^C:],(9u]vp?-]C+iJݳ+ւ@W2X[Ʉ t誂kS+D{,P-@W/BW(]!`+ˉI-'}+D)@WHW<ɢ1&EP$QBy<WñN\4\΄u0X#% sGp&O*֡2_1}!vkoι") `Œ! V J:,so9~t)KgWd JwBtut%D'DWX+pu2th-;]JC@WgHWJp&HBt9¥ɼG\s+-6,!wpO}xZWR ΒƤDWJ*fӡ++X2thht(հ< F)̹7AKMj:4$_֯ ҿ_7޾۷oJ]U t8^Ru](Ԫ/U+Ԭ#;BVu,|_J՗^7O[b;g7 (5\oU8{qXN,ZqߦuյM\ `?VgOg NƘm?(ɿz6<2-qͻuk4^km,e!l՛݂*I~e͛ׄeV\QW͗ԲGcT.p>`Xva?0 s6g=¼z!F:GN{Z?޸/Yt'5no/~ߊ lꪰ1aJL2RexpVPUX8)"-^RG O^[e;A?ZBJ?r1["ot&!~+bKK],0ФNr PH<4Hk`YRA7ЖDN-4!sN\ \&Ԯ(󁳒C@EIP+5J'%0ՠZ\% Ă̙6h ʢTT< nbhu.FPf) 1/aX*D .)/ue,wVJLD Ck\Ud,k|3j.>+]JB)"RFL%/Q +#.RZ 6 A] ti4OA%c6YJ, /<+PG'“hl A+h4eVaGiРZ hCk̦N8KEQaX~.1 h*](KKoKVg%=3bba V[ZDg ЈĊI?l||ebq2/9j k[pR֫6 Dk/J漹ETXB་ϑSw4IoKJK F# u}ɨ5]T0Ϊh)qE.Vzg8L035c`܁@JO.i+KT`JRX/ l eT!2i5e`I;=Q Mtd3 FJ"2n-]u\]rǵ nǩ"w;/GZBt ЖgBML1 '+[0f %x"s; VAmYdPB h]ÂOuTV􄺝0OP4BZ%{ lt)K(<4:{j**u(:ݡ-!xy9mm?83 %JJo*c7%~YZ,X]'plCskR6ZNDQAQi+d j*6 g7$Xm#d*zSJi_t#fH57ki@6.a `-ePȄv%HDO4y"WX'S[K1 z,,x/:L;*q b LNV A 3G.QQ r 624i!\= :7XaSќ)Eg߀w,\ BE[Ty'RX6 NZAaR! ƺ8OR׽( ȨPMbKR[ BQt=),5K͐1P #o e${µ8C`BӐK3 ;^<{DԊR5xld1fPTԃ,0'$̿2S;Ԙxa_ٚO'r_~O`UzT L0mZI|4x݃Ky@r *}rl*MT$];M1[Zm*d`1:DbQi.|Ѣ`a ƅ{[ɐRa2QY,bw t4X ey{:K`r@!YF[[Qx+$m CK WUSUߨ^ cΩN&0l;dz% 0Oh=Ӎ_?F3"d*Kqʕ'X1Xk3tdD4(cǠA]KrAyρ 7DR&/ @ab'7@R@"42O0F/Z%\) GK֬kdža:-[ ;fc_XA|{XuŠxMR4'w I#Gѡ/C(oV1 /)Ca ۢ R،JSt \$XWa)1QFǤz ڊ[Vj|c-Z RתMJFm:3Ag2v8\PX]ے>ݲjeנ&%7\sA-i.Y;8mP Z4XAҢ5M-m.4Ŕa}!uZ)A(Oz|d f(8qKކRZG7$Эhx itlp1:-jUeriL  c9RQ Y 5L?GR!6.X,tQ@õ1[ U\*3'b1B`E%蕲`j?&C'[O_7̓x`;h&횯yG{.><_~suSC?n~o  eh?y[oO;wW/T<~u. __ x֛aZOF_ RѪ:q@w Z5$Yo2 t ,I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$v@`>cGJh@g$֯3<:J@IE^t$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I I}eI TƤ>{hϜ7Lyh9OhJ JI +Igw $$ jI $n1@7pK}$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@Zk~$E-6#%8q@0I $[bsY$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIh;Ilw*Z?vo᭦|Շxs~]{oww B ,H%W8siqK@K@\ϮO5UnVn>n` 7 Pb8v~?훏*ClWԳz͇4Л]st1ȻB0o? T!w߷]t{;|~^PX?=J?~̍?^z&ƣjy_\CCʽ)'5ܦ=wpy(g#9%z3橗ԉiOU=|\O]%GOO%QX|\?oj}l^fxT{އξpCâ4e)w쥲QÏ6t"$7]1WO]1\?m=F{tlt(S ]CG;dG+FVTF*F0Ҧ)v䆡+(thu}9]1h6HW)F3xa4 ]1Z k+FyJjtgv=] j`t5n祫Yh硫y(62 "zl ;]5+k(thɬԕիg3ӗfCi_U۷ ٿ?\ATZvqds:ir|{C))Rc*۔ H[~mv*^Z{ERNYKkϹnoH6SF70\G 6F btz 0tp-BWօ *Er=]9~nF]- HW|P؄4 ]1q芏9WnJ#jt4?O~ `YbAFQ/tAF0-hA;Ύ"%gNW@HjtWCANCW 7Q hrk+FiWCW].Izs]̓{4 Z] :\1r0Y{耖Y?2xu"Z-V픺$}i҅S}1 =h 'Qaq'h-:9"4 ]1\7 ]1`NWh ]m@t!W+kY'QF ]y@tŀ8fm\m=J*8m8q&;^)4]m@tAaj? ]1ZNW*hZJ4N_bIx~=tByc-|([<aV'(G sX<.nsf Y O3|:Z\-*t.\0rɜ>׍kW~<Gqr6ڝLA)HtIadyhu2^(Cf8q t-s:Y(YUAWQs^^JW8Yp㹟7헾1rdX{n4~_NWٻ6$Wv~?w>'`؋6~T[x \ i*#fAue1Տqڹܿ?딯S{nKпIm+-\nf +ĦGn^1,[l-Pxgnsdnvb@/yVZ5/yo1o)޿v hW7V:zQk-9Ub)@Ë pWa`J("\'aZ\3R ];&wCi GcBM܃ omTDɨh:su}¼v]܁pH4/coCX?EMDuz5.}wv,F&y,/0hhHhzoc#W(qTxey(k !/vYr1ZY5v|qOu Mu{7VO.9-.ǜ8^ ?:CKHt] \nFN/¯v}<f>[%,f~޼?u~ KCxӶ|Uʍ~{5~Z7tj5b7cTդ7 XAo97edWlf6 dvLWg`洯G*A',&t>OAOUX-xLt'79Lft$KsԫuKX%y^aUS4s+V˦︄Ss/bz@_giMLkS;$)8va2 ϙ+ro`eCȖx>ulF7;G HM5;ma6,&0&@@:%fyF$= ́xJ 0`M,rАH1&0KmԚ|.gJ:Arg kd9\j@LM.w x4bv͐ ۤgK7n7\BiLH&YN)篲-;F Lj# e2% jqAkAZL)o+ʻdoݝ6E&b~-Nfn7;mg<}ùx: ͋ŵ@ci%>Kɻ? 
4\gjnCv29Z`!rߗ;BEYnYrMk2 jJ; ,@F ɬ?γaV%@ i5Υ huI뤭M$ 5Ƅļ?JvV]7Q|Ka7hފ%Jo9wnYӹW`ńvy&꣩&qٯ:d2F*¶@udI Q a _$WHx]TÚ {KPF ]bs}FFP7I +%cD|IΌv`; D.B%̈́,S_w<-Q,TZvFU^I(]gU?ygBG7ޱ=oϷ rX䣣D1'E1k0?5q"3\q3:zlcNe4EJ&Q#tr $yK Y'S:>yTYg2 (!(rÄD LQH5PNwQ`\{΁ U֩ Z>9-(K$F)R#+feZqf1 ٧<2b}hg3c_G*jJ BUרi~PB)PR\UEAe*Oj@ )Ko 4$qɍAK&&nEOQ,wOB>DNJȮ;)"%.UHCǾX3]U!$C?ف29$H)r% n)(+,jKq)P>l1IQ T> O] N1&I>9YR'&cEѤ$R)\Bj[+Wi [#pBk @,\(m̸aYY}ZUlU棲 7ML[UB7 25樥r DPkL ޣ fO@Nu6C6\d{0MtP+5Q'툱 cVXi{܍~2&ɠv[q,jӖQv1}4gCbv!UNhCs IL0$jxOE NUN#gg7Kt!۪Tx1.䷫b0_|<7L0ϣR 2 H%$ݖ 6W p&q.{o&~;S&̥,f}+A{ /}mPnC5s &eJ,Lʈ[&!qT6{/'ǻ䅋K`pv%v&30?k݄\YFQ`S, Jkb A&XHTlxDI?|:MGlq2`Q/F2%93qb $P; QM$G8A\&2F"Npolg<T&8miĶ}@a'j+G/ 4jwXXݾuݜŶQgˤ,fϬBrGɂG%E"ux I-mЪHoqC9BPypP)j#7(M4q&A4ј >$V*HE>SSR!B*,K}aCP2j$OңF9DLBX*)c( ڡ9:e> ZE5E"h((dhi+1Z 2F8B@NfhK@ضkű6Z6r7lscq̹0;aْ;y&ƒ! FrgC{vC;wT ^'F ) <`K\J46)Qn]v|=wM;ciɧ~ClA^N x) qZhIqA SrOE-04xB=޵-Ř"e޳ưڃ}EiLkS0d:!J'B$jp{ۓ(}>lVC?rVDε7T>ѢTP1uDQa^g?wSs~,1YR!{\ǒkI@JyaF)Rtd%:2I2^Rqy'MWA,Dޙ(Ea]L÷Sa5],A0ッ,Qv'߿[eѕzQ'2s9A8bO~ҏ_;AJpTppk'CSs|5hd aPNٔ}ҊsÙ+K𣫳rxq#[,NT!c o(u `ōWn %\|JՇum"ĞRy7uTOT2 b߼TpE,֟ʳpS*USgRa3uflj[5ucBmit~a Ṿd6^Z++cXXՌoe=\~ gh Hh{q >|X2ِcÛeGCzV}Y[V֘*kn!Y6kl]Vf :p+1C(8$0f>9n#q+s\%1?jT0P@cZk`&{ilv6 Ѣ":/;yȵG?54RG\"K)nN9驄?{WFJßv/f v6| E:(W߯jɲlɶDm1fwWYŧjvؘ \t&zv2/A;Üf}uk //mܮќ#?.v#~O%].<`?qc7?' vza?)E#"%O| hU=k&k,9[6<9UKM <ܟ#ZA]c m6Nv>;y-p<[+Dmh{To燚O=ɽxC-VLxb2΍ >Sipv1_rRëu7juRroz-2 7/;d\QSrN VpEQѧ.}W\CC7%J*=3PL1f1AENjK>j ɂf>o#/,JBI!NOP.}bܣlQ :߯&U\G @.ۄHLRTJƢb1GaGsqtoB^S:)֦$e) + hϖ1a9 [y,hh-u=n$Q!ˌ$r3oV4ftr.Y%/PaǔrgY]Q REb4Pv7nTA#(E d4: 4ILwu1uߩw?[r`PzDRŀRGY4a%PKy6K\;HuB#+!Qs;[ @A{~rUoaJ-\L+C+}ɐYHgB@WXJfV^2PHsJ! PG' 1y!Vz'wQb?} uTw;i`f~tܫ' oN?f |in 4'=:BN0.Q[=jFz4' >(I`:RJ"[{m~~'ƼGI$}R/V9v)Ql yPE%&.C^JUGgi%D锨A[EeZk4OzAJC\^1VW\XyT}G 9 z몺~5vrcȦO?6>_FYoZ.6Csҵk3|9TkT\ؖCz%6*w@V=f=CӘPo|9Wgx4 eau$ԹUmh#Xy~;wX-jY{V}P>Jֆ\h*%<$b)Is+"A nsy37ڪYj>8DLJ 2[tÁ3.Lq<O4xƻϧ[>‡)oS6`Xb6 |.ofB4 =OT$r+}]&B4E8Ecygozx! sV91n42GPO48PF[0P-W1H{1K-h͙DLIP<՞J!p\ bcԃxw^Nɇ]ʶ[lMٺ}]nxei 7 79Si4Nk \x`($i"*gyP9|g7Fތ?܌WwΊg^EyB`!k๮Am4 O.&ypkFX89ʸoY\S!4Rr^9]+ r.٢Eu5dc&)&38]/OWPa&shY6-brIO=?6-/׵խ،5:Y|f^&zU*ԼM+߭m/F5$W.+W$ڤIhƍ4od7 G^5E:?4"Mo ?}.q9`֐yNk?lԫݕ'=.mLs#y 6gy}v(vkXHwXw˻>-v;W 뼢 _cFbvq~R%}n MN% n-,86, !N%PzoL]x *)UM,g1GdA~Ę}ȍ_Pԡ mr-[m=:omS_1=Lt!к-nj}}c{^os7] ವm"Ɔj[=U)o{<ML[d{nyoclf%(:Fuڞ[̜oxi1$M :P<*riй,*"C9I O;&/MThI-;0<.+$=NxpB D42muP*1n1*HV8 T'f)e^!՝*/TllT^HТ兖)uNA\y[˽qW9JywxRivQ\|T@: ZhX-ѵ1 kՊK OQ7z.'&kw|};] q\$$ɛ3F8glΨ@DY]5).RFGcI%TDΣxKeL(,E'NiN\ZĹfڥʬHh_GۺU=Cw(1fFyR|P4q0k >j$C\C 2%=>oMR0K^sinS?q KQB׉bfLp'mKCgV\pHr\3rFhϔ&, fZ[U,I#ZA)8 Y1qg+a+efjnGJG$Y'"{X.=,wV/"gk&Qq1x%UPN?u,u&"; ʣcd#K'RvJ\7E/CbP+̆IXg%>F8%J&Ki 1C!B1J%7E GG_۽*g`oX0.ppM# 4wAYĦ=3𹤉A佶lXg. ~iWC="3sFR@?!6))8ʼnRj< ρS`9xPznKc 0>(6BvdRuB5l`bid. 
i' BBO6y" EjM> W˪*av\<՟4nFC52H$gancH%񑃕QQ~$v;@Pgmy ,8@@u>uºKe=& [wfuY7so847g=W4JJKNm8I iPcTSjtAR)Ke;"zZG]™hep# -IQAqJ)n g=dzɡ[{ugiaOѸ[޳|"ϒYhqϲ1hnkߧ/)BDhu-5b^k&49Ew!Bׁ޾ !u…jUL3ɿ#0dg !kK-I&%PcA);2@D^\1*ᔕDP<ĹÓ.w)C0\y5#mtƞ]?0]cyeU'«ʗxM!w|AfA1'@lU|mnrzH<Q&_Xڃ vZO\e߽i?i:z{II L张S,KYϚY8K<g̎NU @SYGA@1ꐒi> ##8RzƁEF"> SeUywqs6ܦ@EjЕF \ޢFpWbp)$Id&hi˨Fz=.+ ر4rkpETm˚Pw#zWŝ kfQ?})y%B<˅ RJ[6W:/vfn/v-vc-wb9I*F戴qA}΄C#%i)Fo]L,*L#_8_ZӠ̶ NsMV^s*'BGzsjl͢Ѩ PCS6PrQ%w,tKNQL4|Ukx1EXKʼcJQi B|Of$/^ f xk/eHu6ւXgZ͍5 w;9L Q]Lwg@8aڡ?%hҠ1ZMpvBKm+Wh#<|2bh}7 |L8v2-$Jźb]FwIlJOyI 0gEM:btNȗ#9=1/Tj LM;rA pٴCsLQ55{s,tIvxk]9ܿ@?q,tvOy,~UN[sXNظL7T[sv9-Ҥ‚׋;/C.zrz/d1g)k Z,$?F;:ؤg?#W٪Zfw#?@B0[`(jJK߰0E Z^7z\'W^ }'-[V^$72NnT,DiUmtPVf,z=jݻOZܳZ4r?rtnQ|kβ8 MxF /O6ViۃxƸ>vw~6*}0g"*fXs4mXpBŷz˙o˫wz k^"vkV]DғCZn}8}yy_|7s¯+ݺMo.P&cgk0L~f ++c;`$jre'QSkGȴ촓wYڸp^&Wu];ej*purqGj!\\K઩uvj*pui0^pnpM6\-MVWC +/UbO L4.\kzi u&W^pԲ;ʱEWW/#\ICt7jr}7ԒcUS턫Cĕ'J,> g9~$%6|ZXmv~Vm Yk_7/͗u6w:2_^k4JB +wgWRkvۓ ᴶ@yCS7ڡW/t-ty!sNϣ~^1Oq_c3o> ίO۪iK4eˋ$u'݅7ۭl5EY8Hk>&yd-{O?ʰ& XIHaj'Fr8C lWMpG n&ZƎ W+mWM r%uG?OTpu2Y+vϰ~W2mνXM4ESP W+k!O UM^pԺ *%M:D\{[V'jjMpuNㅎpZnpw]Ac?v\y,MO\)8/d&؈=j?yʑM+=NzKê vZv&v+ǎ'\"`xm*Sorפֿ:~w^ 6 WPP魙p4= ̾@+͞Aj6w5LWNEpEkzNHgm+\Aޟ S+ij*M:@\I%,p%ɽ/d$W V+4l'\"pԲ;J=rW= ɥnp~0OUSWz=!C0?yU!7fj*צ&\ªpDɕ\5ʎWM% W+;Ӳɵ V 3v\5JL:@\y ^+fg5઩uTܹ WOHxbA7wt5LA*ܕ+5걦6L {arUSq&\= d{ƣΠ30Jn-/ dWdYd eL5{ׇkZr K݌h\{4<MS49B5~<# 䎬L餶/7 atOnӐkMcTvbZ Yv+ XU\5֌WMWCV!CczM'S%Tm߄o&f'9,rTkfk~vrzJ }*"\7 g^WVN~:^ON;8_.QUͳܻJ{~:t6]% C/O8==p?rWtDNw wNWp Qs{En/QkqWㆻ-9/ؙl0W|~sw{kgoRn=[f>o,5~|/Xeon##(]$w*^1ޏ p {y#F8y3 ߇m2%2t1Gb?] RKKA)]-dASLl22bmw^:G$X?>S{Y^\s?,Mn—?0,or0JBxjVD:t0DMAhmѮ`;m䚹u2GKB I)PFH- 1fUL\&#IKBj_ǹYʤٖ*2Bx\zZ8eԷA*"HKN(s`Tܨ#!]R R2bN0'BVR5S1xFi6E:⡎߽{j/ժuQ*cDrUH&"")],{]Cz6 Rpl43X4oFYu2ub戈'Q) ִמ &83{k} ElG)Le7Vٶ\A( $3C ) S0-"? b*04ZV!lM֠F'KJm o}# 'Hx0Hh oiH" g]H$V(iM,`#<>m\) Z'kPssj*%V62JZ8' (ڢ*D5b u}-ɻ]0y)(Spm::~N?DWIMj=ZZw)V@5t)"j)b!>q`"$MA@"FhOdH_L*%Xd%!KQ)/3TCU6D-7bF F *`5KD"; x{(!xwɑ*m&)E."ZyIl~Kl#Ohk|ADGx`Nbe*^*#%{N!d(M4U%nEʣTcpm*7)nYwJqjhV䫁eY33SeU\` CpjP7ڢSڭ9E[6EH.9msECɈ8ؒQ* C 2Mbm[eE{JUg%TJx*1&w $  :8\w)#8f TUD)S2pYe|a":bI9N6WgDB҄xDʌ7\I1 D`@7.*Ȃ DH}e2aIDsʈqkxA7EGN mG8_RląmJupI1iv % @j|AgRA@86X)A؛K:%82b1+XiͲ{ a~Ear[ubte"bkۈvd(N.% PlB[]#z0^ZY!$#c?{׶ܶe9}*5$S3pƩTѰ5EG|IgmPh;6+Njzu#5ȼS"ZJ( 6A j"bH v/UTx2V"k4iA#a/.KщHOz44U M,^dvNR'Dh.`q8\6XIhb)~7׮>uAB!0%A /ፍPM=LAGzH\ׇAh2Bfa1Jr Ҥ}W([ąsA) >@h3L QfhAZeJAP>/%$0AkUd 2inuě!q{dm U߫*PJ9wV9~ ȓd}h }4Zb#VXB(;KS6H!/Z@mD%|uj Зm&#Tk ) 12;"<{ѧ@h:'!H;v 䁀:赈R(d}BkHhmB豶/=V55 (D td:gl:{% .)2MV1A2~ȃAu CEdUj)yXT*U^$E8"g@ A169vd ۾6?rXIOHzgq; i$W&Eq#`Gr]JOQ!,iJ6P[1pQi!XטM]jWH1!ٱ4&iE/"i!:.XzI7 5>uޮhD;mt `j?_O]۹bZOEoči$kh(Ȯi# NloAqӭ;-hrO6^61IPѕ<}_`.֒b=nd>9Gi_,M:Y\~hκr:[9:.~;ћ.ssr+X^-OHOyh~BgVZ}%Œ:_n_u8N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b': .`q9e(ed'1:<"N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'8>ZGw^4;yzASM=zBݹnw?ˋ#L AHYFlM=%uz7yPƥc0.>`DEtQj*U ]Z+NWұ*w18!`c|5tEp 3u"7 UsrBVCWWUCWة~["(D5tEpeS+Biҕ7n XY]i'Myc%B8J \VDWISV3%к0u"[[hA *+ZZCWzY_eVTѨî:WKECh5qʭ%9F Zs q.zwn#kQ?ǃ}4=/5u&3M!M[awѕVX ]C/0mk= LWA{Y ] k+B (d:FRx%+++ BBWVN$J1]!]iXz&M\WM1HhPFtut&")U3%NVaz]{v}82x7}GOIhq(]tR}Et%eCo2 btE(1ҕRNX]"BWV˩4*iٟ*i2 vL{l-F8ڪ~pN~ ¯orn𙹐tG k"Xn든BLEί;/c Be{l>eY?y+B)RLe!|.vƥb JȂ,g1yEyTW [Yei"j^S`PЁYHʺ.xd~Yrօum m"*U1QJZӚ< .fġ_ބ|0>\3 ~wyS..6n\5NCfDa|V/6y?!@4Fг 4NŭNKn>~̯Xo8>~Y;'|!J t}FnwrI"dw$ ,p YYҪ%߯%[mv{Fhn6_]UIRṆdJ£3$P)A`t8jnoz >ِ+nXN$4 I2wV\^̑f)K( rFhϔ&, fZ[U,I#ZA) qgUgO=?5uz~KqD2Ju"-ry<e𿈜IGQ%c4 W7z;ߺ=0p:͕GFO\49-RMQc `P7㣟tʻ~\Lŷ?PTe^PgjՌf<=n"w5* N( @@z0@4bFCbq+0Jpb8XΞi6 ]vc6݇. &č '>^wlDôS RaV54$eJ{AGm/"u[#*EW|ȡ)3))HOH)NDRIx瞂@ 3t[џzXCю$Iځp`AeC?uIFҳ Thuw :cQv6.}? 
"#` NP &ngy)`3m離eeK98Y׃uQh_r7LKI#b1P(?j_3%e\|AI3Zܮ|7t';I_Ԋ'a"u\Wn<-,Mq gŖ#dO=3hFud\tfמ(% M>)U.2qka GJo{Y> >Pvc6{$!HJ 5:F 'MM u[J\&.Ffq{օwϹ "V@A[}Ҿ0*ݶb}zxNχe+GAQR1 p\(}&8'",Xá ߐuPBqk/1-Vm CGi-y BE p"㏃50ԈP2k=Qc-P(qW|b>ʺ?pt9V*IOSFlI[G Axv"y9Q?д",`A p^$ׂ)$HJya# B#H*Sb~\&,')UrURb6 ^um!DiImP$;Ǥ;GՍҬY'Ρ›gYf.NE:RWt~vP/񄶀$g9)~MX4qjr&8{J?Zh bH"NՁ`q FTyP@F fdp甇x׃jrǶlB|'PyDuD"K6qwu}05(53ϲɢ^Kᅴ:Ь%c&^3W:Eھh5Vd 7/\K)# jJEmHM,q#n`TnjSX%ϡ5Z~Zy]rh.f;*t2hƣUr{ӷb2|F|r oݹ8BUjMΒ]eW1ՆT;Qi!N}̽Gd|vtuSrqcv*(~ӥU ylu$,Q|6׫֗U,fcHȭ3LW;tdP&~]e6na4@q3o^}gg_~eW~~%8XĀ ]Fk/cֶnw_NQxC9~cU0 h`_v&ِvOwˎ 缙;EmE_Ecym܈}}ի#jrG].G@.7\BmB ;ڎtzcοk ӗGhQ>}Q$p[e^Q#;KՁ{:R+C:24}lh6i ^9XwCˬF@DE`h-~zk()"r. .m<2sPd>efwOѹLCQBvn,ezODu2%GKt(ѸҚhGCDmAy̡yy1{$3;= ]瘠 sJK88-NN2$ NP̫`(47XJh 4`wzb tVA_^ͦQ>2(`Is$z 1m\(XfHow!QSHSMI(S WP-dJYEBA㭌^DY>ہPo$*DcBIy c#ÌN%"$!+CCzuh:Hq$'C7*d"JQ9,9tYMG; L}]{=mApG .6+ؾto_(4 Ǿ(D*3Wȼϕ^ )S *.K| ?!·;Ã@7giϘxaԓO)YglW?/$K$z2xMD%CBd! ]a)E^Zy@~E70@GHl$ıD/E p^HFjɝ7@oyY6HDMd?uf¹r^4Yi[ZƋ6Eq#2n$Pjt-2y\U4X>`4֓w}Ƿ|VG9M$o_gUvnk~]Yc/??]ut: kCJ6A ?4{Nrl߻Ԥ)8fT/\{:BxUhM=5jr{cI@i^棫ś/\%v׍jR=,ޘ0 ~Gj%&n]SP7YCz6hmIE.\`a /wkbK$Ǜ,jR%EleS 6H*6OUWh^8c׃IT'fS(<}Pu_GWqAXo/꜂󶪳ڸkR6WQP5WQW`o*sU3npǓ@w2Qf?pV|}ִ ɗ>{G~o3^'_KstߌQ ?E>5o?HZqm [r?P/jTޠB]zV/+, _.=h ~⁺dkD+袭-!ۤKW+_,x /6z)TpmCxP547y[W_оn'ic|/ m{;iZ&4{vw;582aNƟK|!xfH_02$uƅ:!)P)*7?2NIr|NH-: ƐW#1% )I4YpIYRe*x=x > ]ҌICp/Y>Jrvq=pݣXZ9iOi ˣzíTIr+2wL /< XD;*4'NNlA]'6>FB1"a%t6^G5s#W 齮b$P %;*5b {?&\]!ϒ,sHh@FEK\%Ipm- tI㙷|"u\&z0A$;]ZmCBG$߯a*u.VRQ:Q SLҠ %I[֕Vhy[DB߶X.$8}; :BNҀv# ) ֙+ IF>=Q9%N&(ˠ,u$exQDwJ"Cg1.28cT*ĠwEy U`mTX.=ڮǐyWV?Ӈ ~,0vAƮ!2 U a' O7믳JtISJPl+n$vtt S]wШZO8akM M5K)E@Jje$ů)F%38C!刣s@@'S69 ܂tJ; b&3UT*~2hRU`EPZIZi ,N08d, d p7_ ~!^BmlS7C\trNΧ.P9{r?Goo>Ll *-cy^MZ/T+r$& @0n0&8Mv~0 S):žfHte!;Jcεo8Κu2)fHPՐdҊ1lpd'bP)Y[~ʄeWk 2NC(w;[Y#Y-D˴uS<ΓOzDsO߳p3ns|+voi + R&r70*s4%.L#o[t'ְ͚fݚ< h}z=WWYGu49˷}hx%6e>8gw"#<ݓ~E=7q}‘Ζ혚㷗xLchʇ̥3}9vPo/aH EЉ{.+;$@J O !eLY%opM蒦Xqbqw;;e׈ix/Yℳ氍1(C1wԩ\#-(I#ؘXTlnc:~ZK{ވϘx˼`EQyS@ MI4eFI%;4D)"Vkh[DkԠ)"j꺤{L@DEUL9ݛ?a]_w* GqUlHե2Tah,T9V'+ T)Dɺ;?fSqqqNTB$Ae"I \:RNA oDI<䑇.?K KL l߹x=i-1~IcԠNS4R⤊gg$AU( 5 G ǿ%1Q8T;6FPDiM/٭Cz&K%]˙SRyrS^ʥ|<=yuEp1 N p6QI%^O%ZRBet !9G)&$NhzϥR:$p,LJUR*MhJ[b춌J1[Xlfi =p^-|BwE.6#-#_~y *{u5>s[l8пb$F7*DjRr1`'Ρ fK'Ð=1~4\iS"hA'툁qG[)lb(,ڥfǾVڴ=}p9hMq6&NhKB5iI0txC.vh=>و&_D{^~󹴅Vcozr%c_:%qRL)0Ǥ7unKoM(17[AO}LI{']raG t'Nj0O"F΍ >T>P @ }h0/TwꛄoZꄳHd204!F'@2g:h"9élQsiԚ)c$pqw6KTj$/ (OVxP3WW9XEt'PejvdGh-h ( N;͖'JQFjHʙ&.%eK9H*vYzF14gZiSXS1qvǚ֭Pb#=gl7;IS˓#=_\]k$y A b) yBrURPȤEB\*᭫ W%S@|#ѲQv7ڽvְُ׈a/Yʣp52R:_@5N%MJ'D8\i'3$%$&9˵У"(-tbh{̨lMxӟi1K_Ouo|e @Phu2(.TQZ\亞?R1q-1 ,mnFOmF[ݪ=[e+s5ڊ%Q&)'ޫRH($U!itA7 `:tp-b*thq۝9]9Zds8%c09o{\ ۮ\䴹G; 1OFakW ubtaW!q# x߱$d۴&F~m{}50 BaBr-W*D`@g%?4ZqcJY4Ҍ S *F0"V:Qe5sv}V48Zq: ݚ ⥷[;m^7Y N >z ~|h͚;eCX ɿGF1^07~ĀDrT"9DQB^xHX*'2i!2ҩUc+FittNY-+pZt&3ЕՓ*vby|?:'Jsx?]yS+ֹʣ$ttP F+Rd;]1J#l=+DА]1`YjpJ PilY(CeQ䏧Xn`hHh]:K*v "?=J-:tSyId<I3A{W+/ %D  M{дG"v(IgC$S*=`Wb2^%+lJ*!b"pm2kW-EOW(9>IRV%CW.% 2Z_(!ˡ+S@( v;A3,=LWapa%hqGt:x_ݛdǪ^*wAK`h@NWs")yW0a2tpIR*tтUDK}dUJG-/ : )0` qS I&hEn7x697'dVF&CW.%CWVRE}+pژVe|&d ֥BWV;]1J#(ҕu +!w~]\ЕGPЙXTF6-zE*tx7%W5 ѕ:_; -FQR^G" (!b$D2tEH&drWTCW wwח vCWa(Ud :tǪ^q$DW H<\%S+vW'9j|Ic Ngz IQY|BQ\XAK6_X!f"`2 JDѺ`bA$Yq;=f ?w=,2[sqoԗ/vj߾ _zP]?p\=0>1kys,.^ufVzsdMǐ'lv<=}5oe7?ccL9?`Ҍr0^zp/RAVR m.q]_p#7%Π9wM-sgS]qfz2ʴ4NN'59da9wz7|m:ng-02F (K笫*ǡdm {B-x}^߰)0HyR}-[꛵vI{@ϧ=ٷq2c? 
i{TxzV_&EEtR店ROo]ObQV&m~\i}oHJv-x/f69eQ Ѷ'{LuK'&HkKh_68|HnmZEY/K?OՓAj<`LϼcJ $RW16pR 6/'L{Q"n{ ѭҀu\6ψc +7kO.9{.ëʹ-*GkЗzʡ3hPÖ4㰭̚mNhy94kuМ/^faU3r.Uiܚ!t6x'״iSʡN @leu8>ޫMby=?ޖIPZf:DBC(6 V6*'0[)'u=2V5U W&T۵y9;/𼱪zm,*dU/i/%6&RV.uٜ1 ޽CF <(C{ϼCnDk`Fk ވ6wQĪT^Qg|zRͮ g/@ b>tM(4S5Ɩ2TMU A9,oZ8!y~rƝ_L'"*gZXbggA2|vӓmFN|XO@##)9BD>gKI;HK 9l\eVٿ}` ;$'=8ϼ1mC()E#5!|ՑqH>|fߞ|2|d->bԗ&tv~xrέ5oK{4v!{T%!rmɗ_k$iɡ3T;k g?qgИzyI(}Jj nozr]88[sG2^wkOhY@O84e;*?…17oy4PPteD) WKFԪilS,Ұ7Dtt+A] ޛ prZtFTGcix5j˦qUVXmе{>1n0È,ڤpȑ'׵i-+=m _AN&I7AυV.y!*ɛle+ś(R7Y,le{@3BGyc{f+A7.f7! S<1eH&_@nx Lںjc?c-TڌjU+4#xܤ+;6͸U4Au~Ko; Zwv Y:mdqzZ> ?9\y<T#b2_|uZm.u ݰj+;wzkΡgKAMQKarIr=)E9/D Pf-IiVz9bE b{#ʗx.QO #wm.P2cBLhb!IzA B@Y/87=D?YMt>CHlF$ W'9$ q]qO rNc{Hp*AFf֬40%$هeO'dsU~࢜μz˶[9jxbûn{ՈH6lɦH8W,G!,'y-o"QE(X.oAiNa>conx)vkYݼgtY -+,`ݔ8hMSJj\ Ip1,sŸ'DQ',?Y[xWOV?9>lwITm[bџ30T: DP[麑Pu2 9ldԡM;&IPZ?GRUze&od&uk-;\3&%!WkoJӠTaT|3qv䲅g$):;_=bHK*L0&5ݓLtK&2GIx&ʎZUTUNKg g5>'u/y]+H}}e+/5ڵ H%qfG]L+Mf ,(+(+D fiHϊih Q|֞ӰF4jgY..o(L.L}x`3d\ϪIs Q$ -kY ?{QC[Gm]9=[ ¿}BdxW!hݴ5kՈVWx~%NF7em%ëV2Lft_ϚMX;S۳Ps! LdLPdo6]<blɭ"X-R c;a W$foh <`'}(59 pXCD(5̫鷷9r_q4.<)jFH[PЦԪFJFԠ5vs+ 7 []ȣw|?K/ۼ<I,da=nܺ$jal$UE,I tZs?<<.lAX\_'G8iI81!a"l7 NYAVBƫ yg~Kyֲ0A.С2rƙ&u%k Ap]ZTW?~Q/QɶOoBp~!i"@?TCY&|e=k0/"{ҹN_FҖX2KeԤmWX6c-ŷ<[*0?Ǭ)/ƞȩ$ϭ>=R+ef9Ǭ`$q; xE6屩/]9‰7 ;&i\=76eZ'hM_=>*x91J3 J֨x)))3΅I[ah{R@-[ P2+}M_&+Go$T'qyWo|(2m<\<P ub Xk#F1. iTh/eׁ|61r}<&%"NtJזkmCd:B:"&3CԚzard07.W\q)\Y{be>M—\>WlyA{RGqTZ wz]=YO=e/S7ZyA,k~>m7@="n󼓆[LRqО6(}jW Xƫe|d݉⑝樂Fs-O{\܃慧ڋ9;>ȥPW~Z6mѻTRq]5dI^EI#1B^IsiC}A;S7\ȏ[&f!h2udMNu何<,`"7|"82E;-Ar!PuDْCQo#ֈ4nD{r#M~6fgR@mX9nZWD nY}n1Vi΃S-֚!{ e"-eL[ ̴=0x7LtERD2I+tԙvbgsz@PU8kQ9rmzZ$3rgUy|6}yJ@#tP}·ı w#76qxMJ惡8PdKqmcY8QeWbeQ\0l1(O2qQW@"-u2E@hoplPF^&T-uE7D?q߂Eql65ay"J&a.41<)] yC#,lR;,8?ó(*mԮ- Ƙ֤6:Tx 1u RlAIڳݴ~Z:օ&qF1u 7J/ ':PRN@Mk^H.;H(cM1 4BUnL-H%toE%w`Oi}w DfR29 4dG%9VCRRq"8;.0uxn'8 u *\S(EMiNJxAYemmxq#IJ|]LRlC^״ ];!Z`B+|ABX=DJ*1j` H2:; -TpL<%x6HHȔP^<l)P*IX+؞P)'eQH=zQsZj+ʢHe*q$ H(l/;z{<a.^Xtn E\(h.k.&+K1Baq#vZ27Bix6̦k]*uel* Gl \c6~kv9=<7__ɧ2]|n֣:n@0GJg\=TĘH%'j.2+&9_>Ȓ-hhɍM/.lTx2nsܴ)եr 4<%4 EDD%9pqj4?iAVy*\kZ55N#;^郋  i3=M1]Z˰ƀqPAd^`qqtJm^QoU1hgS&0lXp(Eh 8RS #U}MLtώRvǿc^vےy޷ +LXǦ,yֱ/\k5Tj^Z&;mZXTKC{ּ~Liݽ[ijq:jeLƩ莍h< IL< >bh:/dK^i i)d&X" fܬ2%([u>Q5Y@d|F9K܄gHp;$%UvsE~}a$%F8N⡰|8jyU4X o d3@| |`hWꎧx]o03;o_;d'xm"x'tJ^{:~%AKٝ& eKC<eF5)͋y h2aō@1X!:)h7)!(4G?xܬSa*1NP*OER:0yY8͚J+y\ܬ@d2*gTh2H$AII4oHIT(&"N指* ^;;.Py5xM!C  /i))KE9˵\4"?x0_"6:ei= ƳJfUO}\U%\9as&oCzՎ7l5BC"TIm:@|m,WlQkjoDk3q0Ё+|VL^ \30Z=d.+R9TIoifdkL0M7,e5܌HF20oNZY<tg6Th9O8"2K@i`i7oܘFHc9@A8 EREdp "듖`WvdD .PFCbX 5qD : A2᮲5C&D5IHåVNb[*kkPɰaC7o}>?tuIml3ltUןGl2"4F 6 rJ_Nj&MؽneKX%yxxq"ZɦGutIIRLgffbޅm9&))3Ή\a4Z7LiSJ$nw3F#T (VYÒ0BЍvhNm෗K]A`N<ʐ m#D&newy$&FH  $h:֚GԈ=W~fhWC >>},2rRb직 qwa)1i eGoE8!-^?,u pYP?ӖPqh@OlOJӏlų1Yc~VO %fXOӗ$Q>`")+c5 P`b&P4Wνc}̒wU'{dܗBN)3b ( h: |a`fX{<5Ox}}u>ScT-!*b!V~«3]`[3yT,"cv;C4彈]sTwi;9}M$>Ow6h*q&'"LkAYGf蔓Cv%ƣܬB$R*2]89 hx=W. 
3 _yHw,`Σ5*3suO0 b1?$3i)0ÇN]^ psb7؎ ƐX9O$)6Q 3 >p^)%b[f|ޓ|^}1?GSNFĶu;"Uqu n 3>n­E;QyqEqK7ܹKHM?o( SƆ꛷jߐyЄh{WHn0zh`{> xf'{J]J.}:SRTRei$5 F83K\r#tTዃ_'ߋJjrD\t?J1io}l|BIF8Ƽ- 20cm͸6]o;ƤҴv{y^if"*L^+|-R&ocsdCM~}*8!ԗϺ3p- ~UxeS*p5fK|VzVO['3ua shGxW)B1WJT4g6nyƫXNԃ}$U ˄͞QWnŒ6:0ui g֌b[r6%Sv}!pu=UV04{A7Mml?>M+ u&d=Js]Z6Ri2O kRe0ւeQzYahyq5#'i)U4(f!)1}v 5U`I*%*N- &<ԝ+@@mQ|0G`=orA g^W|Mt$Mgl~c `)~MSO?!MO}׎lKt·0'9s_8V` E]ew2g8H;7]>v1Fo]Dpmy9+,Ƹbaly'J<ܺz'-x2(r뙓;{Ro%8i3Ir)8#ga<ϴwlDQqOWa6іD/rF5{9`-/}eUbOo7D/xzw>e{aMm/Fx0l [YwZ9XT3Zo@Kn~w?O\펑 u`أJ r/{2{c' so;4\3m@>濻4Mfz VfjAՀ#u0ϟТ=znlS%S۔d}t;Y]kuϬ8zgF4o_'?'l RIj@iq9OM݆bگ8V1(c7SCr7]"S;y0;F mE'j%հ]3(lMQ8 Ɏ>X]nAJe~K|6C"Qe84(YO(*.lW '䢂۹rȓ\mb>9o8L/[N:(#*]LH0uQqI$@Lq YhQj@uhVX4zC(yH$jhI}nR/dMۃ_1.՛xˎ޹%ڌ6Y:# -R8cJ T~mNx/$ TXN-yVi9+ڦ6[F#͌Q׿^l˪_vIuizDS"+ƇL1MWm^*OrULuHQ u$r:%U %M%^ xTEZ =Chb-:18vɣl!4FL^mq5;?qAw '4rݹGZ}#hmP{/+\&wid\@ X>#my Q,QS,zwbzj {/{(iOf)}B\$2J?uZ~LCKTD " p"))R|8F/ #eA }Qo_:S!#1KFbvˢ3 WK=Wz] mb5/**x<8nӝ5@Z9`8ˎq[1#IuAJ' (;Pnќ\y!1/xNP̯+={C~:+V`l]R; 4_4?Zݿ8,A?[%]/ `_w?O`k&lgA63GT"LPc 3l}ޯ|~0}gS{|0x4]'W7S/[~\+M{{۷Ij3쵓aO9˃<[t{y>q6yqq}ŋzcX)aspd{q,xMޒ;;rՠJ2a%jXtYF] / /ͻFTp/wJp>4>5g*rقITA\pT'Üy[~7^[02s®7T/0)Z]%q6gyI> g2}ve __ϋmKfQo+ ;7믗M|nnL$z(Gfj88J \P%w6jb3 ѷhh{ Gd=E!xyq7l0Sahp`kdJ7E Z㙹hn_m;)~&W :("H"IhVXdPLE2`Y ~q%ux ܰ}zyvv/68y3Ϫod^?.V6+4%W5@ѓ 1uzT[=Gs",[XC,4J8%[q<2a^Κ r+C5 J (\(Ae2fJ,`à<.H؄B\%U4 ] !~Tܐ<5}, !qkx>*d=[/0k}|p?^1+113&0:~ ^&SDBE 7}񡑋v~1j( v(S EAGpYjAAOԞ).exs4jVwS!xx&$j7 Of%"A. mEHY2 *;Qly4|j{ lj ЫcPyZBQE2V88P\(*1R&izh:E,Z<|a:_߰{ jD 1O-2;UiΉa@KDQ4Ξ8 Nwj(fѦa:9uDغ,Y8eGqt GDG)l%@#|h(!:}`4[qR2@fgvoFMʆɭAnOocD#bNJ# Q C`ؠ;u:t|?a|d`Ko:uVFom9k*iD{u •%MA˩ _L PBLa ,U lUC6$ЇVYs(j(j"jt&*h&p]"L6RX>~y.>B_vRP֌]Iυ]26SO/䦞| t_{ݺFXYA%U`Jt[#8iLT ĭWpI~%5voYWz[$l1cJ̍z܁, Ct:r?Isn=&Uz=!^ z=|અZ(yejGuz(EA`I1[zrҌ1o;FQqr\z1-9HE~~ėX+<.Q0Kw{\d=N|2$`%1ҡNP¬Il{sģ|zݤ X,!iY(}H[c z2d+I±(蕈T@}g&imy\0TnEH#5\aຕ1-[T|M MJ'v\s]|҄Ӧnʾux(zT @E^6neVk7zv5:h3i%whANÿP'M$-ei~2> htDH#%o>OM{v;!MU@kK;C^i<$)b\ Sos?{ܸ2Bǭb,HR,* mNdH)R]ؾ* y}; 2pjT#=QM )OWaXjHSuq.ZFB…vaY䶯L\t `FӡYy^QK刈"_3&ċX39Jy+BlpG`B hm>x|%(eLًEpbϖw˻G(.Og "4FF@j1;_ؾk(*ֵ> λUixD%l!(yq^(mk&%pǛ4Ga_ep@R*@d7㲣d yHiO5m m8 .84wzMZbzr-c|nGa>/yyRd{E [a7hHAXkk)HvFCAh "?ً݇J`:VJS]~X#"|F-pg"y2'kВɌy,}z2Y @"h()*V}:5QTtr?nk{@&ǿBF 4Cd>TfR[-}^7dOdd!3'6b]"I)OHNJt̫pHq}¯.ǙVU`oLqI Nr7 DwL[Zv͏V'4l1EyŰB\[^SB$<5hk7L/eP`JEovn-TDG6P5{bi ݏ"iag~wmSp -c?O:e<ڴkƩ_`hg9;%1SYBFFz8om 8y8\5Nheàkm!dP"'`AƲ_+R̍_s!&52L-Ψ|*7J^@ё>SqN\Dy2Ve ]|8>èU$kQHI}8 m2R^X2Tu=d\h\]I}:FdC751%52.lj+JVqTRF{YbjHE52zi ᅜTqamI_K{\x={{)}э L %"YrTU ` <|+ _YC$e?.Қ)Ǫ6]F--FGůx5Ɇȩ;-]h _55*oQ88h؆LS Q9Fu޳~vaRF7JE! 09wH~3e|NP\i>5 M׹ zK@|3*X)IumTKdS4ѬĒY}yvVyPGkw,hST/;z>.gFƵy[ҜGPpP7I1*eu/̺ jjGV?\JOQ>Z'p x(.yfBUd|a5,j3b-BvA3&f"@.ےȨݝ{]TVvo|8dL; [iŜ?oRkxF1(NMѳ1LvW̯\ɨ 4N(,p*D:ʛzAXGh:ZX$p y%)ꍔg2QYpb)#"H@ ~ק2".|OMfiSccv QOwՋy)霣-ؘp&82 T!Qz&5tf*CuaQ-Hx\}82=GG*՗9J)XWdaTcF"`tV@j bԨX4*t< q6,MS Uӣq`ՁoORϟ<=Iq[\ΡFAȸhZ~25'P==e^2K5 .Yi'}GB&,ʉ qLFfsyѝ@kznDAO8rVQœ`=E`24t@9Ph֖sX}  >eCw 9,4EYAc g2A`+|/2ރ7Rza4ӗFtZH;lC?Yz<8oCHF:Pf$Ä(+>LfFpe/"&|sLC]:Qxf2j~S 2 56/UktG'>"ERKtrp;L|FӸ2 sA[4E Xbt]ՈiaAlA1KB3ڴJ6Aq]G΍P|D"Oz#xq+BtZtZ_.gP 2b~MU>܅rq:&e<6:6)Mh5 )6ky}GkZI"RnH"!|*IzR j0>6n4 Ĝ>~%V4,ͧWVW3+ ǒeA+*PE蹜>w)]8cebäǛW:'t7uP&$i |yٸ} Ehہu_=duQu: :8_6bT+AidA<4nRD wLetrqJ)uL|{XuoK);$N9NWC@ZA!,H"mt#N3HC3<@`\9] .xc=?&c\޶ڮZ[>߭|SYb2 ʭ}g| a#tx+R|p?g0 a`R VC> 2E 7t[g~(nj-%JQ#]6>fXPӌ%z[9\sX&Jh _旀Hϸ#&i57?̙E/kLԓQݎ ši x;cgE}R7k{7C.|/jng'_·x_tV#-}홗ĎT`QYҊQ ,w!7˟ȣXPQ)bԕugVMIxWpweIWhX񢅴=H:QLxY98 /ybNT(S@*6/FV@3Z`ڒ$ˬA[keZ[7rfgF{Nȱ~5d!!w0\,$Xa !`Q)8. 
YMIocVJ/Yݬ\1TTBltZFLQ%]xLec%W5R~Ɣk+}6E0ҷ c<3yo}%,)q&pSH.vk-cnRɤʄ1j COCO޲H᠟Wb uz $H[sM&iuses^X]yr"!kd\nJ*͝u4ɠ AݗײAg]lu[>fE[WEku2K"Т&Da#$.DQIx'%%]a Lo9U;͏;u2бD­9ʈ\ZwQk[ rҦBQHumcw)PAv!}l[ _HZJ`.!Lȸb'>ϼ֯ HGB<3/2fa\˨ ڳvf I]DW 2:5zҽv0f4, ~!Ơ:[rÅYW&0cjdT9! 4t|S/r4= KxaldS!c-hLK :3x[:P @^}(f>T# ^~%2^;U!);iJ֘<7!R`kbCQ\+eBOMs'|sP)Hŵ u5J/ ܨ Y6" u[ ܙֆ&AvQdZR48c﷧ +7ib9T#%F<+Uċmz5Bu}bӤg\?ڔ'z-sqT?я!)UݓJ2b`gEMh:1gCuZas)19#W1GXe:PWH''l8vZO͎d'٣6bJd  )v#Ht Nӂ&v}aLgi4k=>%))J~b X5/5so]is=g#ǭߋok+}t'{Fnh32j\tMpޱǫg:;L*n_TaQ絓Ru[|@-wvYN%55l]Ky/#0m.X6%[_b.1WqrROϒ|:@cFp<E y2.-b0ڈ7h.W>bs7`JF}YrxH\WI:ٿ]qVZJM ϡr&-_PRFKBMH\YS#wSwWC| i.Z2 쵨ԔB2LWAznTPxS)ǤBGH]:*Ͻ=mB(h|#~ ~tɧ4m0o^Bf튷mRSOjD@gup# @5*U gKl-icpFf-D~SY,E>@w8ysv j{#*obJ2s,HzC.eD^O=D㷶0Uٷصu*i֏Mi)gKq 9lj(8m@zlŎjZb#8LkqlE=9X0II4I)1^E3ݕ3χòz-<WweOJYٍ˲9 \fΆOsC',ˁHU$K5' \fJ ]l(3uIn'Bן՝kͽނr`᱑ z6'HNnĶAg`NPt zNP{r$o%&g/ݢVo<7FH*,#`ȒJTGHD͕Tg?̼ŔP]K&rnkŇ۔l;}mqw>=pwL f(@XR 49֙z6HV (\&mhel|4aH!tPbB)fZD xɀ"mʈ$Ka Fn]u6n-V|bHE>KGy6pLV܋zTŽ $j$Au"i^}r[^]*=wiѠ' ~JݶK]:)AeW!SFuuM_RkmY|k <9A't,7h=fOjI+S÷je (?FQ$_$H!rS-S$QቅҪQY?#/x3Jݖvjh;FS+8`c>PYJB#D_E|CC,VBC_xb.jyRFE_}OBJK~M N`rEuhd6ˋx>]ct0ߒBBlK9(OŽ/񪘧y:OOR ~Op#^Mz@N H%ZĤT< 1m$gn»ߩZ6Pڜ}hw':t9cuV|^m[֠=RƖoo\/j!ј=Dgeق 4Wnܴ8Ə~a-_nM߄Py IRDRBp5%7~f.Ɍ%b+7h֠ sl7w0t] w{qDocG8ܼ7O>bv g p|+wb4 9&~75ͦwj &7ZWhyҍulknN!h]ϟkuk1ѐ-Σ }>`0i;xHZ*yžoChKpY!8Rb)IRںg(Pԛv}S_<A#lmUhi-҆A*5M")n=~ۤ8I|󹇓m$=ʆr_W+|WxXMcB˝ә.SE. 93Щ$$ԥN)*"e r ՟*qrF~vo&t-ΒzH*-'E ?+Mz;_$5_Y+h-@EjCZ@ι\= Vs=3>}όϾgzόCrS |zP\ ?yQ-krtʜeG S"tk:} mWqHm87‚KZ#y&8#5:ㄚYhND\Y-t&Rh1,w,H.J=R,EU$q 9n|=}ne{/q^ȭ %Y2\6B"]sNS hä(UtX{+;*P.Ce?'Mq1rav(u7EvSĔTTi E: ?Sl[C1ގc3 T U= lW rabnoՄ:PAsӁz?S?rXzV-6}\_WNDCo!I";rL"}bG$rmsJ8Jd͌%\՚bd9FUTCi_?2$|:*6zHЧJgZF`vShTPF WVfyF쀱? o0ꉩS!32swns@^ , {W$nR1FseO[#!s1[4WµatrB9)Y:ERS*bs'DVZMvRzv@f xF7Bm|cIs j|ӵfL}{_<6*NuJe%roB]2z~,sdWLIeepz>Zks@kNkA)=SXAI0tks!ͭ=XHL S[%H rJkE=Ʀ5K&bbsU!_HB 4x-@8 磻e|RF&@jP&ZPiioVVK=HdP,PxOA"V0hnd31HD{DwGj:3,KPm 62!LQA92C mp,J;zt(PDHf8Z-scfڍ0gn/{iЁhjz!ȠTP;q%E|g)ؾ[ȡTsLR(ąDтڔvI(VXT戰L^@k틁v6$ i\R.k393VX43^JtrT? k[)$GgpNn|k]"3<@rG?2?[Sj!kZh8( ͡ݽHBKɵ_`gh䄡K$= XyjiQFӆ RzO G[XI߇7@ )4vz5Nw<}(hm8g{.? rZ&(x!g]#;T-7eZjʢM ѿ YfKFKWLԐ06}U&3Rer lT7Cu얂Q' /S %]C1Jlnjd!7wX끃(SdxҀ&|ܨLjA?@hOWcA}^M%P87 l, <9,Km)@ƶZhFVKڛ] p*%,4@+$ Ityʷ# 4o@0A`ѪkCrh*{푖KnGVxg]$`铉/FkU\3ܾ͟08$$XY*fj1dRtUI$_nt~C~+,1uձ2Șvƚ\Rk9nl[?*VXuQ9V&J8rI:ْ)hbT*Lc;=c4^k[:zVX>TrťxL&q-lhj@ I!_LdP).&#a0M#+V LjLT.A)VvzVk]в+eg=O7#z0UzIu/,99KCD&hz8(ǀ8բbB0.'Gxq¬B :L3pv{ +^JV}F?i6b( 5aaH.19Af00{ 1Ą<[e+K$ʫڜ4ع%ޞONH6ԕQ5Ku"*"1[a6, v0u!2àT~=CE6JU@v8}"΄,bSo.C Bu|oh*~J|"v]ez EQc.V%\kAH7XbN)uW4]~;#rMo{`6d܄q:*fؾ5XLP?qL5)}8rWxaY@𯳙H'm&Itf"tg"M_ߋw)BM7*Vc"eM-s u旉q,:pIU:>`=r|=:Ҷ5g>(hYxs6_w ||vp{q4_kAJuܢo =Qْgýl8>>a2VׂGN$lpl̺=Ŋ[-X1F y%T}J](]_EA ȊG)7>ֺ0w0a(w6tFΧ1~/>߻u0:GSIv 3=B1n5FL̄y_ϮޏCqOl,73}?p|rt|E}7=8]>Ef3>FRNѤ;~"C/'>iOZ++'rJXYl1ٜ̈́dG` Ɋ4`~z8ii׽Άud?ѠlTHmp!fD r[ ,P`ëϮZ3Ҽe+L2XL|Pp-Ay_vB~9YʨeE6&#SFb=_O/G{6Ѻ,ŏu F×>X(wb ([}$IHg2[̏XkհKtV'*zn%()^I>Ha+;U}bXߌ= co#k ^KVƕ)AY:pV5aBI֤ؤO(=1j\^)_)PsL Ss*(uj(VJ{dxk+U,`ٝڒԧEѮ(zFq9PD bTu`4+c ]F[7S7kx/d(ңʪ/vhhף0>Vj@&άkh-dAP% %0ԩ䵖?emX Z2ںcOBs8 !7H# |K賷o/a i>aV+'( 7.quR#;S;듟?;?kl'a9*o}9ZYѴXFi ۮh¶dPRhX"X~hWbO2>8͟GVܒ|ZwQ};T_ľzu >J]Rv=zƊV-~EZJ}kZl狶I׺8π7~ψ7(NA#@9)lMJڵZ}Na啅qB9n N;~#kSߺc5$kn]dU1¸KܚZ@,uArB_Og&GKA8%ot-p[PVU|9ۘ%,a6JsdE[ʑV5ǖ:9⌑}rRl6.cMK)G(' <)U=n@hrImuʃLdZcZF,-+(K-+ԫد/ *hC`z7m_[-8oj:|׹Kѡd(ePi|^y1M{Ɉ.I@ FwDI[^@ BĜ:v!]QYxzo摹>#n'͕G1Jt-2r{)o"2ΜwbxuzqgՠX}#>FPq!APkoH;d}(qgAS#1j8zpq,2j}ׂﵻGGDGڌ.gi{8!I~i]ՔOIjRH2l}:d=zS!֨)%(OJh[B_P1+B]qP/]_Ӷ%CnW7h/y[8

*~Z/cD龩#hyɛ}oQ_y#=,vހ:o??MY: 1VsOx? h3K" AT%vRf^}rla(%Pt֗oKVU RZ!d93sH`/K=D*CWA*`2+mh:YX({ d1P4@RҺMjG5˜ (58[Lb "Et!mla2sT gU"݉^Drw|9$@':6It:Ͼ#zy>a>3h6(]*HQDRjDyK Otc\J[w(Vn%aj`s(1s+0+)KNWz9l䚒jVL F.eA^B|$% ҃YhAV0S+ x9>FII2ZieavZn*TXB/ c|-"&۬3{1Ѫ4)IFGF J#HڇKi6:1V %>?C'ĢPR>OdCbv\)2cE,HgY`Ns,(JTwQ);끍1!ЃaEhubRcbȊ!M4=Z+!/5I9˩(iDHJ$MQ5(^4䷩o6i_h fJ'kJzǗWcnp0fpZyT;hK&7]`|nWrVs/"ۥ\= +9Ҏ?#1laFYa+s&_jH̐"\I ZjŬ?H-3>V#SL|/g%R151yuG6zX2}qiRsق'~[ys'n\K?sp7yZ;>ޒ\.o?9Jj!6^< _4J:*Ǜ>gx;+:<_qzW%km#Ibev˛4AwlCȫ.nK=ߓ%TQ&YZl,+##O7/7AM)^@ɚ?qy|E; XClfA $~[lLܞ>xLKsmY4[zK6OMEr v0C P^%bSGrPU*΁KujTJQ mJQy'!1+磋?FL1yW4U!}:\k{}w[M<ŷ HvQlVrPdjZH 98pl+!7[lgA4.Wj XB2TrFB2Kg6{r7\$Ύ{IŴm@f૸2W!̃"j > A4hΫBDZT V(|4boU\lgN9U( ڃ#JD gmS, 3(і!CmJxsNƴ>DF6d)"CU2ހ3YO]!\ײ굣sM+r53kf>'wW8)ٽ/}v/ "秿:ÓV?Oz(nZ9h49#MOzw@fe#GwX3Uf;r,J?gM՝=qjِzV{Z,Ucg6uj$x}Uw'Lrwu.!#P|>XO%;jcawHx2J;13i՘^ ^Wl:O'ʗ&Hr7k廝ޡֳi;?|x_wea9hYXZ,,K_C>V@N%h|@T K5j8EYi^' MFcIؖxڞ,ouI998K-&o{hU곃젥y:hSڠeт6,J5s-v(pKNH[m_6;M?$Zld?']f}Yˬ".aN*lMgл7g6JSv-AybL; )mN:${%BVNa8^`-#:Yvڍ[:{{D)zND_nlK9Q;;k&D#)Hu8U_ޯ&QI{1ݟOdS"/drz1i9\ۣOyZfIfֽ#Bhq0Ok-d#!,g)Y UWBY뇫J M]]R,\rO ]G'n}KkL+)z X>_s1BᛗxWB,~8_yquϜMS\ֳ8o屍.*%32v何Twu!v<V7鑅 پCDm%NL pex^n4o 9rOk F=hJt=>w2<}lgpK'֣q8Nvmȑ{:+oNy8oC7d-Z,1n k"iv}䃯(X\e!-ߺ7H9{c_/\]U+KBBe0TIC-oV{ 3+ Ij 4=yA^Qr(_T *mecJ~+1@R^dI˰d2&exB 6[& ^S0ǼRJ׆!]18P/Wڬg$']^ݱlH7whHj–ߩh$:+ȕ> g30^VG fc\*-|V(LjTfc Vd@Te1<55~/%6(\j%3^.A5٤|II^|N N 'V}i0I{(IE\Eruz)^4՘SH[eǿ_Λ!3p>YC Cr`B^uIPt)@oQy휄)ֻV!i(H2%}Ͷ$b-%jҺZ mzY*IU 8&WD'SURVI\{_Fœ<1݋ܫ? k%GpP$( ZeAm!f)NgIjѮڢ3BaTKR#0Nw-fe#*D+r04V:Fo-7*~(:!r w2LV#U,#Vu2t6D5 P b""`76\,0EAbʚKU7p VaM WK $м%Tcrd6Df,RL:A4*$L+SN|0>ĚĤ>xeе)deBI+A%נfâGXᣪr}"^âsPV#Z cWW|3bhIcZ mv]ڌc0vO x&]{"Gc&JPB熐G=a*-'KZkIVuP]#@ >{cl""`Zfc2!L I9*#2K(sӸ%_3 VN q}Yp]^9Z|kް/~C:bp~u:u[[-4"_Ng8,ysᆇ|8GǶɷ_ЦԸK{qn_̗鰤wZ.ظVOWb'bn'b0*םuf,\H{١;u-ÜU r9aA:n  F>ufn\܆  ϒʍb9+.P{L`^j]c :gc :!8ګ=>~$FpZ9/ GʭK1rd_NI-J>wmx3x\eW&3 l'617GnjeΜ8|JܔTϊ 7RKq1<;8Xͽ†ͻevB0yNIfȳxaZ%S8m>R?MjD;,@ݰ2zzF(0Nc<@wXߗVUB?fkp]=Xoe=xeZi@JhC#*(oK_.f9/eo;ʓ|w'hC9ux45L.z.[!%8kBhu%C)M\V3s*J†1&y/ߝ6QkӇ*ѽ6eڶ+͑6G׎t>Q ;r[%(c+C^W9 Q?86Ȋ`XWZpc3.؟`驩 nxMcoԝujIթvFDX:(cOVK';Sp+R=j K-ÜU Ib};]'ԮS1|c#{ lK 69Tm+?P${ lgY87dHܗN=.&q"CMjUeq&Tz sdxbQ͛ HS_0ʎ> _c$x2U+Ϝ_+wAH}!ӟ ,N,ù7ظ & CKEq%:*񆋒)k€ssF-y8ZHJrmMRl|nk' =5 nߋ]2+[+n}9rR-nz-خ`%b4MIk wpO$'1|T~ S%,ݎ}L뮒ɗz B ATTa@1֍N|~|Rq=o=w ͋:CyPbŒ-@Gllu@eDLAs7+B[I%m&1ru[^m?\el?\ewA62(+^99&B^Hn!50 Qs8arQ@@-cW-c0iŃ 58v9)ZZuG[% "ϼ(6&`-QF]QsLiRMKBkC6)m>j^y"!fK ~{X3ܷ.VAϼf t"㉵&*պX+ۂ'g5d?J)wa$ywh\n]fel]few3Aݸua\|>wtqbEҀϤ0V,b:nƵS"'#\Hb8v{΅VQ7͎m ?z0 Bu|Si=gh0!ڗdK^+ՁIj ‚i^#WsTv|Df'uV6pE<%(!FG 1!&f0SG +i?DAAjK{2XƲ0n5 _КK D)cX)#QmsiO };Ko0mzfs l6V},Cvf6rs8q>ڕlęq5ldߢs<v)Ϊq$kF"jRO܅`va8̾{0J4̑ዌ'nY{NїDܸckU;ԎyV& "qSPH֊:€v sވF||nR|SrФvO)c~IIV11bżIsBg7Fj1x=f (0KsBEQcm$S' G'mӉtt"uS' {| J|zwo/saq/+O}5xԀ w_eXAwWrĥm]h|\kxmnD%֏sVqF #S|I⎝+-`ewBnJW\exI#wU#.Q;TBNzd*њ՗k LSZ˽CEImX4>l#%c/c+sq_nN-xX-TR1Wnev)zv!5GHo FIi]VM *_%`:c{$K5H˿~WS-/u5!Y_;4%_M) *LlihMt#X4 ~E¨n2?\hsG%p~_})ߍQ[١\iI֗1ua*1dXʐ,x\P=\x(UKԴb$& G02;hr@^ }M.jb`RpZ Zo6:Hx'Iƀe-ڳ͊*Q$x{ː) M 5>0Ϊ/ 6D@!&(-|Q:imϑ-̩,i$*00v4M8^svx%ds(ɚ]n ('⛭%zs 9}_総/v&\*R [TBzA-x/ϿJPMߍ.+[@M09hdÃۥ"T3 CYJ.Ѳxt^у#}b^;ȭHkf Zr@"qCeDʊɨa!ģvvt¡/?.7gk-gf!#CMśu!VU$F12NAb'h( І|K63҉bQp呄32v |~V .ͫ_iQ5gʙ9&h^/Ƚߥ˧Woߌ2߿;=0\U꽰_u?+?[5ŵ[VQC0d|\Y3&ccT܎!o3N2.;;$|WWqv: ]m pXfŕS nq ;Ů迻2 ^{ $#g&6h.sZ3ˣvt љdS;:.:̱%/jo3VCȤv/ozY{p\0*\j5`^PBzRIn>gXP; ꜐݃9UDLNR&nR5|KT;vQmqI9(?S9>#+QfG4VnwNu7DeyG3=\Н);jw5y=c\nC67#d~zexxY>CkK=bźWFްaogkIMBcO>89*œv~M}FvcIj1^N ̬ʪdΜv]|qoվp6x^<Űxo,DNw:gLN'u:q >fum-;<u1` g>N]v rNPD6mrz,~/%}L!]qܿ~^\gՇv8ԃ›ݎRQsկIaÃ.͸A,[ 6.!}z?"RsYUXyz;(k8Jy6:^\t+/"ѐǺwCSHyK9NgI0~vf`dr&#n7=Q<\0/5Y ԙ=% 
q\U#tKs'?dA:*ѹ7+3\tޒ+Xu`3[sѮt#:䜈Ή)wt:֏9ɎV7lG0[Kv1hR0>Ʈ1GL91&RIK\6%Q;8Ck Bs vw@eCaRB0<۪\U, R']]o\7+&ءͯb i2`2;Xv,K%Ǚ a%[單H")sd;CKnvf,ۦvX:D؍S &.nvW@\P;KZ-7~bq":m [h7.4\e\]g]p-?.4v7,۔G'CGƒ 9^65, Ĵ\Mȣq,,h 8kcu6fI}6F6\VGI$:?M9u˦Ѧ^ ?A'ǦnyXl[GG<^6LNu;6˦V;F_ø<7beoW霂Q@,if']@l {a}3sQ@')- WY+m$c 8kc`c2:NU[X\ jO+PXX;}x*n鷈ï;y*W/NVzbӨve(7ANvp+V(6 ܬWs쪻3P;hNu(#Fx[A]' 6v3+ <ݵIʰi9zC(+>:mYÝ Qbh*ض60b8dolBA 4~0m̜#ט2y;˶[$aWQ޽Z)hz%(tN9A *!8RƅT[07҅Cp$PĎ;aI#ұ" ߢ?w#mNҫg xg)&MK9S6= Wq!%z`b.@랡t̐7f8+(`}pt2HabjwAhrO Pj^vFs,78'nL`ķmWl-}s֬ED4c&5V0f9>6&:)Z#i;4t3{MIbB-v b1ka6 a#|>ۋP4⌇ 9+D iޗoAe(:P[cu~:99@O d2ϕ9$DIm]Hզ\5~u;H9“Й33(?&A#*}1az5r+2&^y6_}6%m|  ѭdIK)#f[F+k</{x^8]vز! N kXwd1~t8fA5h=$o _ZR{QPY(0ϺwYl=|pGpA]@qb'c)y CN)_,gymWc_:g/o"aYu}IuUˠDsm,Q3h/(ǻ3; uEĠugFFF}uZ>LGvq`V ]y<`X+ &.V?j׽6̑*X*5uZ †(ׄq$!-20/7~hm.CX㍃ cMrh crLE{_aTZK (o +9f]e4%JI[&Ph 7|) Ux AL!0gqV_m#q>s|&O~ۧ~fNۓO>X?v){>?/v㑏Z9=a;'Q?>;u> 9yv v} vOѹpqϩnޥhp٭ϛ @P@.@L{ =1D/'dX{ 'r,p׽(or83֪< g[Q0nyME76$;obgjâ.FCb|Gd&XL*vfƉ.ٌߐ9eD[&8zHMDF^0VzӲt7m~}A.(3⹃jYVǁ~~cUw>7.YxKa-z]pK9ۻ,ʴ鵆wF{c!Ӛ#FCmrOh_.dY{ oN' ]cV/!]|k{ؙwtQMeY_%ړe}@PZO̪͎}Z_ylJEv!%?y ǜ<A}|I6ֲfeaD0Q.p,7LNGg^&-xC ƙ10/jӐnav l^Zչܳj4Vmr2dGWu߿8}n]0.y䳏ֿ&L'o;t<1)fw;q@#xcy~9~@W]=l=|Y.JFRM%-kM+J}SEێDa8|:)0ľHdYui| -WGyPE&o6I̋NBtysh?4H7fGvя%u%qq%A^ϒ}֎>%$z0Յ-$sc El wj__`bw<~RK+ɻ󆗼$] bLw5 ,y%>o˖o}K87F;yBޙҒ(z%_ pHP;X%<ְ7&࿆hvaBq}Ԏ"i/ #abpiv rNW~Q?_fzO0:` ??]|p#;nī$\s>g/gU4^9o1IWD;tWMlIb:BbIqp{=ioGݙMuH< v2LN1%$eWCM)5<[Uw;95}E =|bA.BZs5AqYɜ2E#L(|Dd3נ\͵LC{$i:9 9$-kR*䈰D$B)0;%[9VJ.QWG^a0)ӣ֣\~bQQmm_1 s 2 )bE6Jk8"K}$.F֤q XaH l0)($qKf- {bxL*!b]pHV m,Q,qfʹREEQPvN^*1Q%\\kRK]Cq s3%~ٝ3nn~̇Y0θ0{QcLjywP ab3 nRL{ݧ83xn*%ʷ6)UC~U0_ <̯?C FMYϬ1b3U0gB4EjAVgVo +2fT웱yxΌL+ɵps7ƞ1LxT`X`lSAR5WJ}k} `=y%dh'm8zJm/)?r%;eXxI"Y$4 F.L3Bj4M Q#xq]G;~{5n&Ӭbs"Dא\E^b! …U&CaRr&U#rӼN bvT 5ݵ?*6$f!;ШDN=b˜3Uq1]Zan8"Ȩ4X+ڹ?rVa}35WvP662m8wЁI{𩝵Y$d"iYR4^R8^arp /;o08`2I} kI.Rj3fnj M0&+Sk'3G-ŔHi ^CLq`PaI)AR D%\ɪKusuf?eRDQR=h{}ݮeʽޕXj}skI}~V(1lۅ '/ت%7 0 +`l1v@0pwtnPL_C( &a~NQ1k;Kل/p6?@zv93˭{@ˠXb68HYj.$x Sc+!]I3&4oQE5y9Ǚ̀؈*Sd,X,0Ah*<D1\2˨CT(Vq!cux*Ўи2:*s12)0qF.Qx8 W(kvdzR$CQR.'bgF} *~gs6tk#RXyӻ^c%o~]0)Iޛn/=@v+vф"?z `_]Z3ߺu'He|6X"RK:iMd ll}J^{Zt-m~#*-fɆf {X~ɋ9ρZ[#lqs%Y|>QVl}<&VZWtÓk'R+];93#8O!iXf1VKy%&: JZkiywo畚ȣ2S~gAO?=gi~ZAU̵G$(Yen4k!`>KhK4D(8|,ks-2 iy` _7Ι߰(QLm^ʼ|)Ⱥ^Q%Jj »(h A3Sd,¸R)-v90r]u3fH4΀SpPD 5^ 7UlwIʄsǿ8~Ud%X%KrN`g%2lO~RL/W|\:7 IBI`w 7ܥJWuIx*OL֮@Ϙ%af]!( Vjx+*br&1B;҈T2 f(1JAQ*9Dռjum}heR r}S CɛBjghy.&[?N{s _Fd{SzRJ BNycTl1U9[%+}ꮬl;ړW@qG+Y/#{d>MvU wrܕݙcU/eP{ N" J;;jHx},ȂQ2@nZ7UB(읕B@PKD@0\-݇n\Ke}T^'LBJRn3C ЬGh8)i;x? MABttβ&8M[l#qe-J EeB /朘Nɲ%m٢Uj45^T*6b36A [4Q6*`d6f |1S]f:x2\s%7` FtՒ}1nȗJxxX!CQU؍6zG *5`'E{l=x齕Խ^]_eR-eR} 5JbpAIIzagY˜'A8FˑUX>ŢU5%f9rv!to,) Dž,;l\Gje`r YEFQ'qv*(XT[T`=e0 d@@,HІ2dMYwR!4)M%B8[Z n9U,-Yed1"LB`TArGq1!ijGVy9-Qp<{3-:d]; *'$ΉGa:OfŇi[5*VSY^S" y iZBDhVb,/XʑJ,vEhi%]:71mxmM4az`G6_ð{ﻶ'p6*TeocLU$20eC%S2ΕjQy{#mC t7H[ODp$~;<)I/eiWyW$Mc?TtxxÕhy}͑o!? 0;=578 q1o) ::Se_WoK:j~S,#v-۞%qӁH)bvdJ!QW1_5Ha^xt@L e\js1U؍Qms1i.fs1cS\!JϮæ#nF԰2zxY6se5Lz>)^|ZD->z|L.`ڂ5fk O%w ^>C2u~bz3MGNOM4o!T6Z&~q2*ISC f[l.O_|.?Y1ߟs&ۖYK\+ūV`t'mIaJ0f|!4V)Z-[jZiU>d֖n<׽cs{`b`wJꝛx ^4xt:G5eWo)҄e6xv`p_-m5\3}*݁tl  =H+&ޔB/U,?lDc3&8 5%Tpeo,J㽐~Q @TU\ʧ 'PN}r^AXja]m,ly.]/rѵ X0"0~ۂ !I0dNJWs/ QF|g^SnG28[:% 4>>tS =?ǾGLL%vf&("2Sg}C8g!dRsHfi P9kFj.4E#U艞n'2m4yAzq7(P[J$22ËO=۹b}Vi<ߧ*p0cTٿJ[57'̝܄'_P&reWV#uX?23˰VXtsΓ7/O\Rnҽm6T 3! 
4cԄʢ9gC ӇnL1Д> ΒkG0^~8t* k~?חL"Ն1Pǭ`T%׮b3ajK1ߧJn+qP{ת@ Y,]^qtװYէO{L$>J`[NGPQ#mq2.|n庭wζۿmm;f)j萾r%smݭК;@~|vrc]ذ^L☐5 OMݑ6 BgBH'6yP a-k㘣+J~[ɔr~*v*kѪ~LRkcI S(UU&Fx9dF'rL @) ꨍ0ŘuP,=.1Iaܕ%'o[Lj\ 5v3vyfY&W-ݟbn.XҐײ7>ʔO^;'gSDt'\H'1nLٺΓ,;uVC #UKXꬻ|sz* Dk0Ay)ZGe1) Y6 K Bp0גL4i>v`%@C*|-jE|1SB\NHI8օ }OEZӌ{4BrG$'Q JJ6Es9:YRih#bS%w\T0q4B +n]#0bdhYBT`5gzE^mROx0e1ӭv5C=^HiEjgwVv欻ɏ~T7cδv+ު/?>ɜF8N%"2e xۻ*g+Zi KW5zs3rDl+ blP+[Gؑ1ܾ{VDv(9/D{GQޯsO.Ѳ(k+=y*;tǎt!"dTeG3aabhOs&MtWqlнַ ؾL, ZSgQ҆5̌aېq]>(3U~ÕxXUG1o AH7sI`Q^B7LzvɝV%rR1=*oѣb&2H$41, CFJ GAFQ43T樲 Q>1ś 4}hQn$ɊaCoX8ҕq_wfG>ì@vkbŸb1rxXU{Of^WrHp7j\7cV}wd᱋ag#m|nEX Te8gˆɓOwh1]9\~7^~>hW&'@YO]>X}#&k iW~3lM?=pWU{45P.)Yd\?zr~q&X'{~@o3 xu|mŦ3tݚs8L.M~")[hu۝鯧}[gǃa͞,B0LrVq[g?_C56IUh1Q8(G8'(:[q^O{7jw7`jGh4-4Mfמozzչ1_(98ec7DPZd}dxp~G͠} JE)Y1Lw_ ?[bo{QׄS KVxN^u:Tb? koR 0ҶwK_\k?ލ#N> 0M>j O$wTYñNw~x{ݻu=ⴆa>=4IVVboGfWzq fI4Ѿ͇`w >QOCOu*QdT$D/Z~"\ar aa^NRaM7&Ac÷<XΛu\ZzH]<͜ L%gne&w.F̎6b EZ1{ =}( ?ޛ)y^ q?NSSv~~WoܽèDj)W''%OwOeXsXmuPoUځ t?#,Ti[쌙;vr3Z|nw5Wd' o>oE{CS2M+`ʚMRR+ Jb70@*P-0 cmmƀǂB0_<`΂=@"ϻ45yHH< [Uh.R:-I oLz%[fW }c`%ͣf &Qςi㋀[|F'_ ;hsBY BJ0O7h.cG}C*R2Lu8\:PT si3T.2<=vg0SQ[0gE{\]mYO5Y<~ +P<5鋐^`6H_+,L 9Hg/=Tߔ]T$[g=R)$)[~L넝:agvͷ:aNةvV&U"lEΎG@*PsDN!rvY1uNn8{dr% њ ҾFے@*ʖږm=%Ywtj[%k[r-yGYaKVZE@LzFbBt,%ږ̳%W ܖ,"/˼aV١Tbpeّ:P,;^~s^>2Y:*+3vV]xI1kp@aBh%m`bCiR") 揝=H xy`Ge<Z;rcON ($X!'1MjeKp*<@vIѾ>LjF2ZqG^r 241P+p%鉫^^ ?dJ!C qU[ڪYzXȧʋ{b;!A;oe|_vR)6ZiC{ lC*ÿݮ2eXF`J`)\l"6| 5`Pw_s;)/ .\bNC߬Ƙ L.}*ɔ'7wŧT0Veӑh`37k@X*xB MtDΟͧLA.F*I r o /jvky-Y^|5S8DaECp* v:^vtq)a2Kޓ$ZpҲ )-\#QmFpm?E+ RJEW ˆ C3Jxf&0E:e69 \DXJ0 Oa-RiT{ڊĊ$þR[Y[k{2 C!E1i c`c04XD33$K-ZCO`Rb3U`I--\~^=T\]X`P)PI;-*rk2Rߪnлy`U*W_wτl69!HQ13k 5._,^5X+ޱ$'Zo䩐V-ʓ1Q)zw;֥B+N &q`tVHѻT~8P!rWn=Y0WW+D?63M7:z62}),nL,MUler 8l/dh4ގ41,BAv;jKЌ[ai>y'~G!xOIv]BLySxN1%y䁝biN-$SҭSKH/ B7ȕ| w`-oJ2H4RTuX+oVgYrV^DCYu"J$ҁf,R.cGb*4 ,3 +KBۡ%3+jUVzwqw0JPL͗Kvҟwť}MwfҦ+0hZx};nbo*ueCJBIc4"ϝ$ ދ4; OWw_R8&s惱ਔrGk[Ckny9ׄvс8Lj&llT*ydmM`{k3M+C_>S˗[CW5ǫo(VT`w܃_Kh|Y̖aot{^q7n^65LS}Bd.$?aDw5 zq+tߥw)?]O] 0(汌"1.HDbfH)sJNRaI‘6ؤ5'-5v7Z8QE?n$('a j60{tČ41\X1\p8e0cGLSYXlB.B`X"C:9Ē(cPSLZ(!AD,PYp@F 2r+Ů4xJLZv+EW6].]9# tbFk0An&GԎ/[ XENJ 2)aG&ƨ"E9̴-" K,DAYs1s)ceW$fHԹc2 c9?8neH|#ӫM^ϓ/3h;0eXF&O*H[ KB ,ŽÊLY ZOͼ+X&-b9װ4@*qHv( Xôק_?yy<4!w,L(1'8M,$A/T xܸـSrz2$W|>d<|߾0 RY ƿ}Ƈ p4~ޤ !@1on{xYބjo.ށ-e T+w7^ku(0WL{g9GCߙzseVc6-NN_VZ]vj>ӚQg#͑3īc"aԮ6(k X|PThuT!xnqZ[ɘL"!aGt"i<)wF`b"_#-VbX-V3:<'2b ءfi'>$*d7Nb-<&9c9)%]_xd/ ,(< 2uI"}&I @}$c*#s{ur3OJe >y̓wgLxf>8{Zh\1Wr3݅*=f7ΗU6˰.ܧ+w ޵8<%͇wn0aG\=kO>ٔ`h 喫e*]% g㎳-*Ʊ7]:_q駝{ < XshxpD B2PSw uzQTg } .sbhMj_ۀKpsiOS,3\*ª+ ٭Ų 84ŧ$V0|I,  ٤jEn8_G5 > ,tZ1id/58%yr4귎0Z]:]7[m:7G$.I$hlm/M0A؋,Cdǜ锜p][ScIr+ ^#ꚕLcĬg^mݬ`,I.GPGHZD?4 U^CoK^-'a"{J5JC0MJ6by!Jb#CpѸx`J١"id^̓~eA! BK,Z#^,p1uH"0+W鰝 RԹ;oToQqv}]gcWx[v[ wH|&pokUYf^NcZ0^Nec,LW5O^eiXIFANMvoΊ;+غE[HΦ Wܦx17A:\Z&ƃF z90$U9[ηV>(*ԗ*旞^MX/cR܃gD]ͭ'B&[WUi}V Zk'tŦLk!\MnEKR;/vdϿ̜vvOqwü+; |/AB,ގymLkmÀxqC|cUAw*;^ Z37!#g K%L,eN0AV. xN&\0iX QGef|ogT$Bsˇ,q^^ΒeJqIM1/%X#j>` — kQO ,% zpTUQgI(xJ .I0 H$/c$|fLpW,"X[iu4}Ho|ZMr18*O}C;ZmVˣR1Y u όbBVł=ڃF%W!@b%*rLԸ)KAG\!ރ̴< SFqE BOٗ'%Rր%%#C FT#ZrlF4ѓKC]}6XdVV d3-)` 6knFR{N $@5FnN@xc+U1 Q<fV?+3 m,=)3-(Q;#^-mja#;f97.gDM>gf%9 I #[~+Bhl.ER|*PѲ`N69 l`/DsÿO%LnX dN3ߍNQHO:t[b2U5Sp! "%LaN!Sog.S\B4#26D`ޟ휗Z\#R{.èF -cEt%EUl߸c7_e r>;t'*eH9&EB(,@YH"le,Hs2%-,QZt-}ˌ-ɰX 18  ˹#CÑ񷑖EFw̖Y%=c\h`$ >)Q#F zEvm"t1ܴ WnRʛ--<#Ce8ȵT5&f j4[3)mKkh[,1ip=(L6U9Y3ZseY8) #7|CrQ7%t@t@nΛwigq:b{TVsn)PѐYa9TՇOǯa{( ;iRP1[UY=ů瓫upOs(eHEb42:Lgeܼ?^ɻ~꧲`cj튗i`?Emv:D k_{S$s~{[Ƿ5QM š[FX?zr3Ѯ^k[*1oEN䁬禄djZ/X ІK(\?o2 $?ݦYNaGc4le5k݇0pybяw/g?0ZOі17̱ӌrtyk&PGߧR'GGǓ}Xv !FzPnS~ʰ 3B+6|.W?w-`f8Y:h3sUr\eg!-Uv{DRC(Åٯ8q/3 ! 
ĵs̘$*$2 ʻ ky)/m'T >~h Wz;I=c$l?j4uj[M`>z2^mrOϿ~~tBZU9:YpTaH*+Cʤ8%^#(q iB?Z1CWHxv}{7囫_G⇲e৛7ס-:PP7 }WCePWΨ~}L]i<^GMn_U%Z:{Y,kPN}̓)n0:<ӳu{czfP*tC/SY7ZкA tź0".ںhu!8D`J#UA tź0Aͺhu!8DW0UY:Y,[v[f[듩.+蹻]\l?8*&oONӋs:l)Tm̟~ ȸll|Sp-[/ӻh Jh\@ޘD><<-f3k% 3,(:,="ȲhUGA%xjveQO%In1S4wN|" -#)ݠ״GAlK|d}1B23D-:c@dx1dQk`90ҟװ}>x[0|uAI4K(l{S`2qg$d6`yTweqI/R^?ewy<%IKQ 즨bw:2,E+82H nNn(n(lbD%,,EfMF1 %-Ln%NL$1c_ɛjʗ- &+ :l#A2}Q`NJ AL' D&]rP;r\[4-. rt\=/&/a.]D,$g* __|¦+{9 D"e׋An{v~O[NٽqC7V.xo1nx"ҫ/?||I;Y9o?[ەî/ V`p[`d&3Fͧ+yzM<VLTlvMVǢc `#].\F*2]Zt2 %hƙga@aSmk75Z-{ %ዤt|f"#=NHD!@32hX(u'$R9Dx:BCHl*QToUpHɀ+>W;~O/_wgÇW:ň8dd5l\RdpKN8FXҩޜ%-I0]U+|aʖPCr1Mxr'=]&_ :Yt/-XpldOd\&U_d6fД^ҩzhQ@ēv%{_8%gH¨%b8 r7єsL yt@|* /kr6¯op6Zf/>Psn/|=.?oޡϚ #럾?o@k7Wa0ِc QRr>ǾI9;\ yf;~42{ОWߕb{qdho}Jt.^}UzgGS()<$ ydɧ;摃`@q#g"u5ח?rsPs;V N]jy+ԲR΃E ur.E ٹx|s4'u.gln;j l8ڷҜUOCCWoO'vdJ79*wR]msLN:H-VEНjX Dm+H0"w_H;%FhֈJ4%#M$3d/$CEj(kdW 5Z,'*|uvco?D3%Je|]3 O1;/,{IƍXm=Xơj=Ș6zڑ~|S-]~ex+ϗ| jև"/>?#g[x_ zÏw{?O&beo%3/O5_Y~cg% =^s.$%lgEfIs4~K>݁ZF2|DQZJODwrf_/iwYO |mֻ28atRp؂9c+'s&A%@m4)cTsajEE3xQ҆ǭa!C9ʆsurLɯGJgeE=G6===8*5ze<~}y<<FJ k`R'5љ_X)_6mn D%} $9IIs*<Ի7a8aK1,\|*17̿X/L M^fT$'K KBpހB0W1(Db"Vː=t):2q D\FmڞzR}$ݗ7 ]U5._ , Uxhµ& r WamfՊ0Vm$"0癀`S &2( LDΌUTHo00d YyB!Ɛ<%^*JiZ IbSso΃F_Ok#<FGP S]D F1h@#ZkJ Wz=ˑ6Ն |xTRaUR6QA>hZUvL%DC(Bz k F $>d =Ht9Ao|(a~67xsu&7c=.s޿xA&W/ TcQh5016q)r$kg'9_ -C[}Yq׈/Kxc˒PF|7 ^)._&kė|C `/ 5KHhL|Xǯ_YM{Ufeܷq|zO:i%4/~,gl/U<pv?/./Tܶ7goo1=\Iԃ0d6ZhZ(Kk\^;_5s=^l?{`/5QZќz58C`uKJRcx#8 |='?cb#2RznKr``;\5 cgȜC;HF{N _yWjk*I3;fAjCd_~ eNj-@H`u(ӛo8NWݚGOY,u,>󺽛z,X.~vs=ѱA{óᮊ7O73} N`8\kxGƭvzbY%N'׿>NѤFWsRfRVgq<qɵlV&ŠuTګI$CڈTfG CK"4᥆X+x1.Z'\IaPôJ$p G%B^:a¬XHVyCķV epgeY-! dB`zˤ 2(B>!)F:qOYQUC0D)):@91RoW"1k@zY+2&0 Barpn5$/%g&aLR "Y7(p8B,;!z$xQRN4nNԳ**?Zw.ָXOWNS .,/f }lku[ͅVǏT_U* e%cܳ; ,x*x⹏zw++*qw%kq{oɱe=T~*Ѿ~bݝJuw vg_k+eL rwrܣD w[Y\AU~n)"x Owi| 5WG}!ykō: % F VQ vqcd^2stn4I\HU s5 Ces̬O|ήOˬr ]#=O m  Og~j>3^'w37[-.߻E80]-ggW ʩd0'㌧ubv3}c?_0L=wyg駜HW3463`G$f'˾C*eFPcxadJ# ,W; xj?$ Bf4IXTjmtsΉjEpFcw%p8Rv(3owejz3",[O5߬s0~I" -C\y3r7)4bX?,tW_<4S˽8loǓIEi;0[7RbgűQˠXq_ƿXXȿ;md*k/vϙQd~}{:2v^YӺoxc}13a78b߇I|@'@KL\HgB!6'2MZO G i9(~FLͯN,dg&Ik][3>a-W'`aXS.C=tj)-V '^MDQ5ݩ!ߗ&9Qa$|˰mfУ',|ybXQb1*^X o2B$@Q}[49MJS 86 (Ep 1D!%EjY~<ԁKҀ4n{D|4,c&~b}YJ-5H>K7(po%@Y(keX<7u^Xa}Wn;e#,ݧuH` ru[/^+1{Wfco7jKROS|Ly,8^Q $iNBxcN6^B'2Fu+y-m" kPsq\bݱ߭Cej$-klDloN DeV{Q*֘ 3pHtKa <#kFZπ4VE4d &LO0i .XNeTezJE>yh2M$&VoAk@0 wy@;A*|=$ ` PyTCKY8P Ivkj@v9*1AdFTr$\']4 lr鲱+qrHw:A`rQw;E(0>,.:+ksuzz`ixA. $) v%|s2uد_1bl=0_^6+1]9u1 ,=<]/[m?swix>Np8X0VxW x8- >\:V~皍;)i08KQI{||,Ky zG%g= |!@t5kBH/e"*1jiɈ)idcUm yKњהxh8g΁;#@,!m7$IvrkV5N9%JNx,Zv$m&_3=eIXHO;2T/.Z=|vbvA :dwJaB̮-Y)fz  h Tr-4{?½p뫖on'=r_Jwxnf9+>L`NO(APDHXw~$!}i2DͿ^4Bqz˩shyb1zT nfW#ݫauܪ샚Ml5Y[$(C-UB~F;͝@ýE{Zb :\JOY0nnʥA`îg݂hdpA .< 5C tINjH7K a+ANسOnIG;?HQ,h~xf"ťq3b heKqHFDm3ϱ;]?]P) 2%:JYdm7lS2Nj֖m10MǤFiJ`Pv+ṻWKء_6nLDC/.X = yf%v g gdzaG0+HL<@U;Ll+c1TjGPۣ2 xgb(fbʨR㭉BǕP{'&*@tjϨFZ.RA/9bz/!G`Sw@IuS?x)%Fm5i4 o 3>`96@E(lH'e[Qco<)j;i$N-|'GcSau:aEZ8m  E7 apN]sx]t=!ec[hIbau: = LY˷OgBH 2X'xRsZ~݆ZQxBX:HRn{ZnUb1g 1fpSdƒUlnVX j Ԇ !-\8{a%B JyB6Ū (-myC R6NLMW⡬C `KPMA=#T1љ~7/>OhLBQٔ@CW*»&p^OOH `Neˆ.u4a$h!>hFt@!2@CGJK20JvY@"EI `JyԁFS?|S,,%CYv}E(F50*S¹ ilv2HL͘.ar\ݧGʫ;u՝:;( ʟ@{IdbY&(3)F@Y-ՁEUl ANwKj8U23ӛͭюO夣Ӗ ø?U_`{wj}T8`._E$ -gO&BY'0}TUX|wpϬiDŽB9f5's E6(.)PioBWŢϗǩ$c)vG(A<L)χF^YFR*7S4R VҌ[ dIm:Ca3D_JAӋo& $)x<m9nv(AjCBjy00"a¡r9% B%ȑ: _3öQ1Au՛f-V4QxeޏsAS$^~.>,<0_:/k}}{1D A ab~L)ş⻕T37+|7w-_^ !O/FTb2\FV~皍B RX ⁛rJm9c9$2s4x5&K `8IanIr&U^[YlDl"u1H "&~)% *Lx((L."1V&6璁 j樂pk* K4K? 
/\^f5{G5SA[VJ4ȰXa;&^Ifsp(1-,DGlHJ>q)A/3lDiYλ2O:+c/cc'^e>wH^|ThW"mNyt CĐtsj[ì( оedy%!}y8 ؃)G sյ0B`E+B-]/4>WȺjc*H)Krki=4b$ 񕸲=ҔJ̛H&4o 0bOrmޓqdWaDXn;c6LaǁPCD"5LW!v6 I$UzG;hVFτGܩ 4A)h$!*D 9DPE@Z$=qI^@J4=3d 1JB,FDׄ.3K-dq8zƥx9^lfQdB[ZVi}ns@=f16q!,IQEyft#I bo4xdJz Ҥ["U$˕KE3<6l4 1ru%eJIaG9oTpAE;W2dUR%պ$|WkCa)V4P%*)ω)yI# vdV%䌁9Y*|$M*),Sbq8~9L%oJ#nW)`h۫tُ $X0`S &-aؖ Nma܀+ _)mv0!B2D c,BtJ1Jڛq+ԠJx{SUXqkP69?c$x҇5.7Y9Ex9}wo7^,Gl?h^O=r~  ,/bo-U Wןsq,O8٢*0w%xcBH6yu8o#3n (n2Hɽ6Jc4Բh<06%ރF@+ I E+sT>gG&_$Ts +;/`Zp!Z 9B!8mAQRZľf>XYJR`\!#4^Y 䘡n6klv8\ER#!^) bS' NZյ xi7Au <+_ '.LPF!Gc x``x fdf*BQP6'Gm[yoY$]VB!N\.\Wm&'hߝ`$+W>)W8\^KqQ :n%B%h j{hV|q (,"Iyh"m2h%[$0I^NHNZggQ^z VÓ0JP)c-ԣObr/8#wzzϟI\nM3Lg;jR!)6Z[lYD8xM ea4,HGSȃyÓG)0̂R3%Mx RxzBҨ_Px1Ŵɿٗ,5~n=y ˽Tz+zchxo~>tvt?\ڠ.0 `~p`lt|m__%L=>}Jk-PX@&V~c8 aYa:97W43'b 4}?=pmvOz㿧0:GC^I 0H# +NI,`ߌo#x=Ϸ)m6XJPK.׋] 3M+|* iYxrmNTc{,wU;O.U:-{nƓk2N-4R(ޣv5PVMbfn p! |؛N:97sx)|?9@c])t@?zɒnڰ%QR@{??%ALУ9$|*nUVϐoo /i$=aA*˱EiaþMf {,;Y\ ߦ:qUO|:Ci:ɼFܦ9VևkD)-X\ ZCPR0dQjӡmq5zJMmAl ? ;?l1?[T:wNk3Fp;- ʘ/VRUxq)G[0cO W#ԝ)7}( ")%tCaӗ}!|NE)iU(twd-w>S_5wSpM.]5-[n w ]Q.HU2ߟcM%;jҝ/E盛Swy;"Own|W;[n+twgB[Do<ޯ|wӜ;JNsr0T-:2lC6:m{rpnn9lm1tjD˶9h+\h͝c!8 e8dsssaHݏ$o8^1\˛E(NeYǺgP`v?[HX[x}zq~{Cꊗu닷#.FS45/rWfx=}fYfum? c/jaxZMMڃQ`Rۡr TKzz.Nik,2/<'Rbb1;QV8t~Hd'o>Ȧx:] %KFcs6p8I єcNT") {&Z+R^}trfc@g.oSV)G&5+Nb&jpޤt} R")[,$V8 f2 ̦n(),(^n%!$Dc8,Z6B&dg[hoSd_/G6s6kϋKAMѽ`ZDjBM+_وB.rh$1+w_)jCađ/(Q XEe. /r2b.Uڠ f%k4jL:>"_wRR `̥}K(D9̥Kh$EWG?:rHН^9fR^9*)-O},Iߊq|zfFë W7E㌮O>|en\elj~@,[-8~|$LwOONכy2=MV./u_.]m0E Y?UAFCa#/~;3򠀪0 _aI0އ4ȏ?P]( 4-?j> >_ *7al|&|%?}zRbU D#|*9/w%n_a4=S@֕!c:y SM;dECV]di!>$ۈ,I/T5Cj:հwIlo╈5 R!SeaQ h/ф+A[Ժӌ(@M:ӌ; )9HWeҨNMH$|wSq47*5ćPxuWGnZ^wE#U]W#R.uW֤u\MvϺA' U3bG("6-r[gCZkFx$T:]q+ ȝ@lihoFΏ}vyCc]k̅ɢ RrR&bԲhI-z<O&s<^ ~[h 5}^?2Bs0}|ta|rZލfilbʏOlCp)Ab_n W2(rqPurh"MI%k-кҭ Y4>zdIq1H\a9H@Sޤ[6u[hO1~gKMJAD< hޤ[6u[hOIqgҍ?{۶8{߿W$i/xmЦ"Xr4~,Ql1/ MܝRVN6hƭREڭ6j&$ Z2%QBj7{ࠠF6*N?.VЃn H.GIwOgdgO|1ݓ( J -Oz[8{iv. ;G}jzdyc%T׽ @L!l[Cc\AE,O>|G2\ITTvt_-8hIuv3I\Jw1 u@ǔbmqLIypL0FPX 9c25,C ,toHSVh੐@q]D@H5~p!D u#bSe~#d &}xIߦI_ NZC&}%&}k&}$4]MzHP|;-my [tVsk.85^e̚,*˿\\$V˧e'b$}Q0ՁQ0Jcu!AG=>޷=ҽV!3t`& M\]GftWevu^.[x2*Of<=#]e3S- ,*2BQytj:^~($KQK-РS u 4vN՞4!B{Og a27d@0e``hGB̏-<cNts+hf$$Q!٧h]Vahf0HG TGC7R!:8JS!?Mw*RUEC+ܡݡxh;Ԭ;TFipZq"!t8[w*Bu=ы]$/CQRX'ε@TVZ@I x+y0No͙k!X ^KʌgxhBH%x0!C̀3> e-u /"%5٥!\$<+‰àQChBI#1X5O9\@?>]gSl${r |3^dc@ci#_M^n -@\$CW)uOkVO|j}?[\yNA<-.tڏn>*~Yn0dKo9^!QGaXCrgK> / ^!Ky1#D3^Zp_-Ǔ|DgQf6ycD 4ezi;7uq@mq!?_|$vt]X8f<$a1[RDl&\ I QEsmRĈ%pÈKS``h̺l"ӝL5X}z/B^>?=γ3y;9pY\\sO  @X;NR8uJ)l1L9J 'BY $T8GOc9mn}9Ձ3'J0s6:FYԵ8DڍUCbRUB2"umE*!kuB8 RW #W+J|&w:͢8`.Jg n6eF=׽,Mf1HbJ 4ԡ_uFc$܊x-`Eŷhl Gv 1P:lM$E $>vd|vKW_ByI@kT}6Mu}Mbd( \"db dk+#6[}0>o)+ d"M,M1'#*hM[5u-4>J&g>PH SgkX eXh0r ,2 I*9 ß9X&2H] W,k'_'Z4I-.VDbe)UTLӹ#P)UN#I@6h{ &%Jc#pZIt*Ib)K3")',VS_&+'7ېZqQl~LnKA3=L6%pl5.fk@Rl0%o?`a`GAs ̈R VrQf`Q0̹[Ul}$! gM%(N3FY Cgůrb~4%t`͆U yVh#t\ [(MD02tş.Y(̬x9ś諟?V._;u`"\l9A$0磃oa-Hk6@^wp; y7=% <ۏ6k J;+ [#sWf^=mR0inH6ic2u0FY 4zl},<(ҍn!mcP ,Olr.. MVK(iT_n~2^N.7=^MV=G_dwK.gd2..?2+_9^FǴnϨC]Voݦ'em>1vY{d,krJ9bh59s⃰kUN[}FA ֥/f̕o m0% ~Abjj'}[tODƧ^7g Um Xz>Թ`o7jm, Υt;@y{*~x;[4'@3y_yөқzfU's26tk8m2K"5z M>zWЭK1G6ha Ra6x z }ϝtC` ki+{mݛzctk%aTtyB΁ߍpaD9yZxGZPc8YߩGtVE|w;i^[^aQ`c17ebf阖fX{)&F lJ9 ,Lh-E^W9:kElL; k"@۽. 
GtT6tU.4/ź.'Q9ħ7^Ց%2[M,K?5L6|[^>/ u(z ۙK0dI$|+Z_+܈ &Le6Zix3Bh+ruS_ғTBی|"zKȁ2Er 뺾ذ_,]^S=tNŦVJID=\jIeU~ )ۍw'fhgYe./qVKkUz4ywjޯ澏sv֝erm?y-M+K}g-sw }yc7ߵc૙ѻگgymcdE+wx9/g%}_n[x6O>-x/AW0bͳ + tM5~݅.K~eDaTpe~ mwׅx3?g?V惏oG4 fC4 ]zGH p)q,e^Fc}\YSʝp-1_YjZ,Lwgbǥb0,6ܕ̓Gdn}-+F.WF70\Fas f#~}sbVtXfη3"yυ[& ߓq2 fER]V|o6.*M {_z7eۭp\AWGLMxI lŭYmsFʍ )/"MӄIE$Kd7,*xK30ytsUd.P"棒gn7X_?gijj:d ].k_h{&XM]4sjϔWvlnc 7~l 8bҟ&rQ#FRyQ}yQU't>x: 2ăd d+ M2|>å(}<{1"{ p}U$Nlt'Xxׂ2h'q  HB}ՐbBefF+A[j-v =lș;S2_$P~T 8_L"i?Y6AvVX^gt]YH+1êаN} ^(TIRRJ$ɤd_dDFFDFFd0K@ /Jd1A (Mu4a'L ߋcմ˴ݎox%a^^OnA6gpk^)|(hY8zsGPk#vL{e7*KV^ P.A@DRp1I,us"F 9oA`WBxb&NV^P- 7DT7@R39zx %G*szuR)v/BQOERKT L `ZG-Jݎl5NE΄(YR@0g$_Y֝&Ӿi!j`e)f웍ճp EnLcs.waw`&][6i, `\ ;w13͢ $a7nL) w*n#ƝKvS{p"1t"xϗؑ] }?Ru0_[/5 ;P+lfZs ą._[NiÐR"/xr%AQFӇ, EY˲؜$UO3n'~Ki@g ?с ɜFBޭ.g;oy=т I&\/"N08[ak‰)e75F^ο= m֤n Bۮm[Y|*y-'YZe14~ (Yҡ꾺eT~,*BdH3T9np}zʉt7:W-3[ދټY}~ 7-gM͑`[H 4Sʼn/ y[$rZ/tafg B<ōV@ۂb4h_>zԵ9f^ChmK< :- IM/_ì;Xy{LkpںoFdn9VV9rPyfh\l_؂p$ |n9 ʁݑޫDō6#쒱nU2$: `C0!]o䇷̇pm.);ə91'pr >? ZAq1묵 Xw0>؀>g@<";ekBBz-Pst#ۇ4b13S`yjA"џp4t^( bMwX ErPcs)!.bm[w>BԑgE+J3 $ps\Oqan! W%2:ԂfoB1DG#- J>4P1p,WD qӍEd; I,PiMݺ(ĖIo˧D L 9\nPl H9r`3;I&:C䜭>Y9$6 hXM ce(ah\rsPJ ԣ&j9t} җZxs/W¤8gs@c93ΔÄ6DnU(cxnjAi^ZKeNގtupҫr0Mq2_bq QQzz==uZֶܛ/ ؖ-OP`EBʭ>~X?06ȋexս??_Eދ1( F~<$YQ^a6SnB7]]4ɿ4E5Uy [QWHN |ǣ7J!\I,a}'g7@})&sq'9(#s[ 6_ ZD#BTRBt7F@L2 Ln J`HH*{׽!\r#RQ T%o/{ȞOdȞ.uT[Z},Hǟ?q<*܍R4ar[`{RKk0{"HgvK Dz1}hO41\> eXMY\tyWuE[ߴ1j"iq E=c< 0} ':JE_x^?n}럿+ {5l`O62qŔCL<&!B쀒=cq3.N1ʡ( +[DO!kj.8 z$8hG4Vɬ0M/QQx4r glt3%.ۃ"8FG;10 ] V#XK'3#P Cãv1Ԗ0BXzr$^|+80* ZY.n2v0~ W4,bQqE[?nAT XdY—bSBY2 (.5X\VG1q3p,᪻gme5ZӋnynw\Tg ->(2ÿ^)GCklHaRiѳZP1d&SԗBe$*s1/2IWPE 8|gf'JUU~ ˎأ5Ϳƣ3M=:T[ꪌy!TqF"P˖"">^o$|{,Aj@r*2}YeNL,[gxy`[3}\";ju> W7P.v˅r8?Ǿsfm_6Ew*i^b[|zfߙoMn2tIu8զz;f@ӱ5Zgajw$n1כHG/t %17&պ*Q'(oeyqSSo0vͭaVSo{UЫN惟9z:gן0hzBB~ʨ,cHڟYb+1 ܗPz @0:D&͊<>ɓQ,cDY aqFGCQ^<c۱JBG,2uȫ#3w7鮞G#;)ywձq>皛gS rfՕ%Be\sz鄺&P0-kT_c|pF5W1 wפ'\eܨ2ai;'E?+\=^al+b C1>ղ` 1:Eם3w |^]X& 7Zq8c՜g<p a,P1{NI42v݃g :\ygb2ϮA^x8a8D@!:k|oz0g&)rC ;M+g~w/PMQ@ܛ.nT=frO.:U{^^3~(f5, 0rz.9v#nm t"8>_.$ )WFTw5_2MFZ30WYux1wC]w!+"*1AD~x/0pe}{YCLrQ5{B[Ёʔ"4YD]E B m5A);zϰ20,PtcƔF]rp[w*E7'4wQumG0~(Ml/X=uju̪:eX0 B9!q&rDHMMb|ꁘA臓O"Eu(g376dIR!?X@ ྐ "?bP7ՓP}}n5]_T{g<"P#=?RX',qϙJm ;*ˑ 2reo+u5)*"/n>ѴJ;Gw\]3JD9e5ՅnՂh2wRF*L{\uV-ҹ .!t0S\;G>͓qD?BGu{o3ׅrE>Fa=r EG4(R /1ÈcGIp1. v}Ef10.xq k5WtZ#k8}%>RrDtkjM - u9ϛM/⮎nI@·nDbxx@aܑ55)٥@x@#xh]RL^fGa 7zߝ2ڈ얆mӀ%g (ɱi…y6!pYw"حyf;]Gr:{+mf'f*9'7|$ (Gz+=DAC]k8g.οn͸!|<[Lb(*9EB[L3|ˆx PF!N\!븧n"--c1z;Kn:C_=p؛9MWwko') &^]%ijQ-Ȣ9)>T'?T3i; 1tw+O!S$?sߟXPB!aڿd&gښܶ_Q˩Ԇ/S5[I*uvcg_" pFFRDMbo*.ԅ(=V*# $5 /"BԔ}3à)PƱS;OCYGIQ  cMj @aKLdCJ#k3Dw'%Q3dۏ7Ȍcǎ`Pe𦬙[G0tY92Rpy']j" JΚI"+A 83z-MO"^$&<8?n{UF|e bx8~7F0nbw`:$1'Yҩ_x> hȈ)Lter.σP!M)HZi׳@6NHmEB=iۼ{ϼPf:$y'6|+Oc@GsO|K6Blu@c͑]+moKUNNgPݜpXAæP '[˧EM>!%+B}#͖j466ފO?*j>B4>#w,SFjˬx^OP⸿r%E~fČy\P<0vA{x稩6v'/6h~l+:y ǹ^i8h߾d l raUbP$< >1))G=t9$ޮ?zFdG?Q`S @9NiC.CA4q@bF2`̪=#DvsEڟ.JqāsI dG.YD2(A3vat08 5:]3&* h0<=J@^DӈX@JȈAP%Q4!#c%)HifDI0T L!qHg\R%@a$ML>n=}­\{XQ}5(NTq(x[nXCݭ:1e}hv\\%:1fv2A]n}&|2%.]-!5߽G]Bec]A6q=e JX5eye_V^xGOMw_e kVQ.N.,—>k{Zs9W'ռ|iu1ð3Lu-vRRp]%Eyuvb~cE N]J -n e^(vp/chm! [ngk.U:۟jyg AXkETN1khUR-t)Ņhh޺y#JDx;Zr(\ѽD . 6Kh/9@#@LhQ>r>($Y}dAaKQ@^zsT>hÑwwGJr 8";3xYBσE'Xwpdf ! UAR;~ʮ_|;=i:|?,߆aP|:.0˹hR!p T@ Of_`J`&jPBp}tF @j$';YϓV=޷i¬XdӖE[al_"QcRqG#}JC\*ERGFcŻxoBaPj"-f#"a"!5 XGX#"@,H`J' `r1jY4_jĠxr"0U˜zr%,b,It238H8((TbQ0(}At{OQ! y0P#sgOZ8eqp8͌%=LfPwy2S:xrKЍ}42;0 GfZ#z;awMvI_K{}$$\~0;5i.~H7.^2OnۧQiRT?'toƯf|y?=E(DDԘ*y@DHcB$AI8{BKYǙ,cU'v^&Rg N`ITjqrUq_{B_(^:; /=\\~Tn @hHn4(7 (* eC\4tPqTbc!l}`a!X.{ŽDI wz~?b+ afP.6BqX,3!f8.ˋ?. ^7. 
˯I՝&[ոX򆲪5UG5zx7Ϡw=5~3B͂;LaaTQ?ypeMn@/ӈ~:zl33i.8[*UbV Jlx6\S%$$b $q"Yi\*@ !NLB] w.%e |;-ܳ [;Ve]Oʻ@֔DX8HģϩDW(<@0^,^f5zg4w ¨7R+y \9~Rg5AUƗ4 $gq<ΐ]Xmn#Ba}yW ,%gv31V1y!")tY3L2Ϻ6 !>3\ -[Ė+pn2<:|K v o=%#;=wri)$(QccF" LDHPԀA! ;!x*0ylbF%++ Q)PHGںXF(ɀ8d*DibG̷#/1OuP 5䎨oZ;hHCk_oZSc R7cB )f*!IȄN#Ɔ*I01R6u1PbT2Fbg%B.A.nDH6v #Rh2nD,\\pd۷`D+݈,D g;`FO!uRч3־v5MV$-r ,?7[s&|KY@e;vMtm>ivPfp.oumח\8A׹&(au=pb2|2"0Ͼ\o~ fXr!HK#ÀlD%CLh [ L7ל2f ĊPB+1 3X!!%G6"Ƭ&*'MF-[gg}}ϙ*1|k7Sդc3rFK4oK尤qGpx39h|%2 em5#!i6Ov&S*!!Rgz! Ӫt\p. 8;w۞\aTαئI׎jSk}Y&xgf`Ό鵖eHZ:5 uAw{V&ay~{ijG Gɵi{xnto ƱyNmLIDփBOs-&#r1?՞[^]agc{=dvț_ԸiWW皗dh^PH^ޏ\PݙMjI'YΚV:TٜE֒|Z;.u&pb"ѝ.(\ x7#hao ӠgwaatakC,T9b9R'9Fn~;^=? %^}q~ GϦHJVӪEP> ZpyXv& 98s$ <Ҵ֫〥Toݺ6?Cqŕ5]vG; tm4j}m~hvt;$6|> fk͘63+l~̿Xg/~F ~#hy85QP_SU%'&, 72ofA@-T=>\<ˋ۔z*3XЗz#;2X46#ʿ)7jT 6~{Ԑv&ٚ`=6 (ut;8TZ {KKV##&9$QJF]tbDig-OD|`OD|P</fE 5*Q(S 4f\%kFRHcNH"Zai9+oG^Y|;?]ҙߘplhuh}:Y^ݸpm|y9} ~RwΉ%}t6(uNUtN3bfrjtIfy5a0=@/c.3[P#k Z'=Y_i&H&mZU:pbpe.<`vGY]|?NҹFw=q%QQB  Cx(v`{>ol[/|nmN.*BQ]R MZǮ9:Mgu0F]ܕ吷wQ iZ>Z+*L-F9#u ]O)p[Zn3'$qsNs,\lBڇәfP:bx'_xw1pŌZd]=Z>mjٻm%W~:Yn/!; $;'yJРxvmI&KIn|. 3c"}bYUL6- E0t^Kuz 8B!haVEGqݜ+E"')L?ڏ> IC I:@BRLiʄѐPg' :H@*⺑@ ˒g„t^/v(E9q'0 iŁRQ%,AXRf2ZfHBڳYa^ܖx,g]Xv%fĜ-.H7w/6N `,q;rӫ/I^EJJ;,ލT;1^!8 8kfD߹$7K j칉 iI2 ʎEbлk)lY5oYe1Ŝr'xv@O+1ʍhLّ>GKʔB]7mto`S)gk_B{o{-x!iz2u1)Q2dB_o41CS"ptqYaF#`O>U=Ghȟ+;KiTB!e{3%P %l`rp % aFR{^Z{^J&>8)uwҌBBh[#[F!vQP"<wQ"m$ hVgI9LW>wXU*wۜ=Џ2#`$p@ eOVk/9ag :kۧ>!_|kFo]j50!&Ih6)HXX!܏b\ SrANipazW6@S h rmj|ҨOp6S~PY"IMqnnsrXVD/_ Eyhst80DmcW׈Wq2RcH0G.Qq v6t\B4;VDEwz_81P0t)Уg6?}~lHc#flrcK0Dc9(K]_<'0 s6Rآ|' {_#{\^ K^hPӟQh:a TqH#Q5W4.S,^0A$E"#q,;h+7yo?=)7MѠ#%% 5Xq˦ްpH\aW1EmLsBpC& `"Ⱥ̍>nrt)&*G :HjXgDj-4(#$%Ċ,A~t%o1rB9Btb% ?k wQo':a'%qc x\A0'JtR !D ѳ` 1{VP\WKIZDu= H"W_ ~^@;)ߤ& <|ӻB_\Ǵkwd3*]]zF)O^p6n(P2O^"?5{Cz}{O}ԫ?R?-Oǧo[W ު>M6S5h,{_LViLz 7N s e&^ͣ̆\!6]L<p ' ;G͋%iTk>qO7V Bޟ1oG߼lܴA.$ qA&| _ d0cMx|FBx0'|99@bj-<F1$_e!mz2Frz:B&H _ːCzKU)G 7&;wvެQ%R.Jfo^q5)\4Q@_~fnc sg--qVhɱS+DžBL:e4K>h#6[ŽPDrһD@j0l[P9 LU)c,.B%k9! *4(T0KvNq%Ii@#F`wwp0ɋ5;.%lH66 4nƖX/ Na7=1 @\-ŧ%3 I qŌwOKC z8jTcXhhljZ `9{:`8C8Ize wtq$P52b,h|'&9)ImA`kп/&BH+4V63=_,RRJkN b10K0;tՙ ˊW6;/2O -`Mz[hf13.&;I؎l&orY?˖KO+_Υ988^87B`|nia ۛ2s◛0i.!G Pt-kQZw!p-e@[PwjfmJkCM} ʣHRNU].]\?`j򬣒&ُ7ԬNP++0 肉C81 J >vFF[f 2Q)GqH_:'C(fsH+Ŀ\ 1Iz'CAQj8}8$v""';ڨC膌>_HؽoR| K_z@akj  ']i]?=*8W''^p-5}L-#fduMFx 6/#WƐh\#?`w>Y>̪fD7G2{# M^=/J*"|]/\:Sk[{ +G { 1>x6Mӵo-nq[|}~S*C񯧵]捾:稠7Z<9a1}77dɻ'}(ŷS='_aE$e;NN m=-].=qY1{ܑE$ǞmcK,o#L4e0킚?GN sÝ/$h Wta4]Z؋,uaЌy`mҋb,JW`.nEj8x{a0>Dx^[Ѳ燺Y^[b Kh~+si2z^ }2::v߽eOi}ݽٷN Sn,\[q|b}H v# o5,vS̓h^wŇ嵽 7o}uIضɢ`}LtDQ]NL˯k)n}b]BcC%pp$am<+˂ i SqB [Ja1K]MQ9*WWɋR11W PA.Ooc<&N]xDל:gA"ul .t]RTNkdA2#%;J6H6Z` JN\%/mh7X9I}`Z\fRW$n*(zAU2Q{T DUr"QMP\ 5DhQAFa^% P6a~UB^؄l'O% ѳepȖoJuu_#5V ^VOJ~h<1+`f"a9=]G)?r;i_X1Fjj\kL0ŨpH T< ]_Qa@6 ѫGDzQ CgCZG]U4*i`T8DCzQh*i@_L@$o·Z)*67Z.zTDZ$Q! Y)#@^T'?znޢp6$z!2%y#8GgI\"a΍Isj3T|nHM(~,˓pJFB4E'ZkehVCUY*$AL|3XJhH 'bTeL A|aArc9;q.ukb-mHDmȊ=<)aoOiw-Ɇ_#1$C&% iUH&eQ@Գ3 !< RMΣgCj00CK_UN`~^^pa[ahg// `&Ah7C°znдKc!z91ׇ̓#`SDE,}/KF{3d g[7P`eStf*<)mqMH>/ Q9إzuddXQq5f fA ,` eӂ0'Bh=y <\Vjno&Os3ӓ0Z/#^wV/*O%?t*j?__|*ssI-oW>FNOjMZv6H$ǥq<aeiFÚh@xWj U#jQfwlMyty^ W;.~OoJԁ*0NI_&nΒZȽ_`hrXƠ3Oj3 p9*EY]Ղ(ʲ>rj8Qh:bv29Yz@r([-r4+bFJsM<>(0:Y:V!<,o!o섋G=2i ,iPޭjGmݮӪv.wjŻ\TE썷8|AQ&BG\2C׈ }ec@w=8^ eM2 |18#Dhd<-436WKsnJ`&ODqx^`iḲ_fDZi$LuΜ$~ed;PwJȜ)A=:]DS`C]%JpzAj$$*ɡJ PD%;Mt$]bGa|~"`bG_mB[Xi|~4&/֝Z돎;0X/y!R@LipLp!j*9GIOQi#h?zb5kġźb]Lu`~qr@/XC5Er`~J}I~h8vqK}f\ds9%uC!P|u*y}鯧b:. @'!feǟVV A .R+,LlhsTl8rݚR>ZZ3f4әMϯȇ907ee>pţo<\e٠:$͇ȅՏC;/2TN(F5g}y % rp|ZrrUl~t6aͩ"ڥ2uIP?Mfߦ}S?lA-wg:H4vwE´Toc!ytq? h:Φ>U;`B5WN*y߇Y͔AȁC4 Sz& vA#dc-v,Y6d],jn- !)w>H=! 
tru&-[~9!h$cv truO6U|kѐ@ȁC4S ~'Ώru:Xݺ+%6t5d6rԦ[wbZL3uϼZl6"ͦi 9pfaJxn i5<4+HciW`)|4-q4!,LQ=C٭ȏ-WM{uح>d-L7^4d6r#;[ݔvA3c=v_2-wkUcvGȁC4 SD|#vDU!WM{uح˾rN+olnm !)Fvc}_{LPۆmB{ۜ "o_mu&M3[]VJp[V9$Az9}^߇LT|#{ڜ o緀Ԝ龿iu&pL Z5ytY )]{侜3L!Y7 94Ex..}1:Wc֚g5vgB!IRcf7ט[ )5f}1:ݻCAzWt_ckm-՘f[5C14ۻ3r|5f{Wcvв156g%|ǰHs_cnu&H(L˽13|5fƕj>"טs3Ao .ǩlqZ_c>3H_mk̇Pcf\>fƔ}1:43Dz15Vg9]֘9=ט[ lQw5fk}ՙ ڿ [kQc>攀S}1:ĦqB5fdטs3A$bWv_c>3_<r~՜ N.I_z ,,0oKSGf(r7T*p$Ri38ߘrd>L\xy /9YokPnF՛N{Q1@h4HqӐY;;\!SUrG֊D:ꔋklpT[BqAcaTElJ Iqy ~ ~TC'@ 0\n8!LC2O/Fq6܌"@50=O1H9`qd[~Zk33E25O!,y* ,$y\ *4 Gд1` ̥jP,K_B%֑V[#M>HtA 1%CpmCB0 0BR B*#K ! # P 6{TtL/@+"0Z)(Rs%BZlaFK)qp5 BMk9NJ@ZX@8xZI SFe {pmT\Z#B4g88V*0$Nz+2"sI#"Gf*d1pLCXˀ <DXTPk<)NhU r?3 ׯt2/]8QIQ>^kr#߳ WggG ObjN5h66,b⧰E08|,&4ΐZ^I]x2bBQ2" >a]q  9r,G`]/Я!JFںn}~X,i_N&|jj}q]M&%Y&ӿih\WҨoU0rd8T\u5 ލGO*hr|1Ά),?"9`̐aES5F]E5ON߿Nkrt-F=p&94iy9{\{q`ܞqj$b+s$=m3jeZVi`$X~`3JCir=KHO!gNq-]RyU>0Kv4UkZxt<@όh[jY=a=kWfL⻬}%4Im{+wrK(8Eh ,ߍ?'ǷFW6+=n>N+|P'XW8؈Y눦¸T 壋)uαSlUmdSe1~X|1)F8q-ƣcxu䏏4d^Ffi[4RɊ2YۇJxo SS*4MeSbWQu3y+K|),E{lVL?Ax0~>u10a˄gӺѾjbp%}N:,gg stX٠1WmaMp"І=B\֟ȱu?$qvEjsmk<q [I *TL'UB /E/, xt:Xu 6PhtakWyW"Vu߽yͯmUדP݈֒מU/RkLh O[2Pɶ穨1vgIEʥB5gbeӅ_%j)xϷ9QhT3W?%o?}`Z]rWR]] "b͈Bx+Ft\//%<o@e_;f:v;~$U֠ !]\K0~ |p ]j|⚘t>M~8xoz_zq]˳~aqr^3t \1E6(ɸL3! b,zbr$ o70 _V0F_z2ƴ|y28 1y$%v~r+OVƳPgW]5\(hMWĎf?U ƅ;?'eЄF7!9j [Kg0_f/1GJju|"AՏe1sOT V/Dߕ݆8$CX++bjo.oSdpG3*; srC&K=Rxij&Sd䕗ق 2m&Ipm0/Fe 8~eAj$[բpS}^i3Z"nvO4hLјt4f;8UjѶh=JD˱eijn]#aLĆ r36[PJר nx7یȦ K?7!$A.8SDY `LU4)Zْ֪-ܖT@I7޽N#M~nz>eiTl`b-1Ș 2&CFmЖ{760Ԉ4?kHMuͽ<*2 XrfK{aRz(Gd$(Zא mDsUvv5U,WeۙO13HS?s)X4]-]wXvj -^gym:Uo8tV\*Ð*" 0c^=I-~\x}hP{b_byT3F:-tDUD,FJu;4q3fJ f)CwYZOjv_RSbSQϘ+BIfgGj _vһX` rll`vZ>"Ig[a?VO~3@[2ɪ_XU*g--O+ 9C)Zm(Fj1y]ÁR=`یHV\}5I+\EMkЩVH$jgrOsV8X2Kp:aLB1~!gZVsQV1 x_2]/9@}3 !ldc]du(LFj/ytcB]#HnN7HAܞe4:Km3.&L.j=%NԞDȗ2,?&(ZBQά.9P*Cq\/)+)% uvW(B0/KNuv}:nݛN}dov7%-Nj"{6i¨a%v0FdF`51;lV׶UUTV<%EaNk{E(5B RhXPj%kxVZ,2C2ML~ W W JS(jxnVRclbۜ1?OiQ&s&[o&u}ngwb^>ҏ]LSL3e\cl骢]E!3ګ$fY*Ղ8#U"4?(ha0 FN E>]2s6(c @e^*6I) 6Dv%߮i4i{\CUb^ aS,hDaT 4iv [(i~;ȷ߻V7 ,~ez3;Α87{?Ǎwǝe?ǭ*"1O0NzÔ&pxD5O2f 9;%djP"h%Ut!圓Z+;Cl(7OTPad,{+bmD=N=ì ZW[{4QEF|sՔ_GnZhЮZM'}y;umկ'<7[Aemw wM2\ qUn6ν^s}uXk%aޕAufxۃǗ)qǕnuBо=\űѫO)|CEukEtTk7̩+2gî!# {(2 >qZpY I*S0onP䞝'n(Ȑ<6b"t-bv5MX ˑaV2G# iYX29YnG^^-VĜnݖ6ڣ')µT0NUQ>"I|ˊNר/0_LcbdG(>A+p^wsywo}kyٱkϻ{= 'ҽK{amO)V3pQUVqra||]E-U 30ӌOvk9oȉTf*(^l9wys~ȱh0E{Ep,(ð8e%uGPaI J9(TisvzRl5蕂q^Z^#Ͳu=IuC2Y*_j dgJ^EZxjM%+kϒm;y K7Z4!g! pxbǻz36{OG$Ew[$sv@mP="Ί^ KCAE$=?0ף+siT$6a4NwTCoYKIIfӸ-Q!#yNzŴV S<$d ܃0^6csb81AXL, BLu x$u1!11!J=`vacl**"ntj+e{\}% Nwva)AKؒdd)Ϟ+Je:裸AFhEJIJ":$FG8@/Lb \h0/V9,JT刔Z*dK \ZSoJHq9.Y9/g)y4{a߸l秗QG= G0rɛp_`k:|z+fP/"Nyq+BEhU7uNǫI:,֨|{;Cߠ᠀^w=(PM$cfo?1L 1RwSSB ќJ -ġ2Y dZ)pEbGw7cyȘ迠bGCa!PEh$(< >_/+ {8 )Cxm;znf鹍{O{tﯮ>T]i>HLntN"Dx m >J>$\s#k0)$:2ڀN`uys]&TJ HhrjW%7Zb`?EN jк=ǘ/q݊-T>5þfVKKqkqXF/U>>7Z.cmIʥdfo7MlR(*]P*F[CҺ\Eq:|9 _F-e"qX9DKv8JrϞ1~^oK UiB @H%>oc'-5Jf-e,Ѧ> I:yFJzH._aʫ&O}ݛӜO*Gn O"u/\o)uѵynx>\Ux6|9ʛ; KF_pt$'݀ EӞl e3Tx4FP '{_N'T2x˜QETGkAX)b1}|b<ӈl#:Ru/3ikl0bxm|/u/wqlpͷf@e2 ʸI }j)T\Rh:%,.׫IS:ӓ6|wA1A#~ݲjEPØȵ_nJ#"; ojfК^h]`nn妁r6P.^JhDΤޥO3PtOS*BڽU<E |fzv6+xC&KΓf}rs$*q㦞e7,gM=JR ne!%Da9$sMJgdhw O~|o_oV*?`GNNf"L@ֿ(aǂ'`sxk~x?9'?|*&Y8`,f*k$DK*EfT9д4 DdvN5Jx -y}YH{x@)jCjtv4 4WvQ!D6B[ wZ[;t#Ft#^jC_Y&tboenI?˶n? 
=cQ@π  *1` Ia1:zVZc(Ƹx(Qs+CFk.^꨺e@KUu8h =oS7Br ,Y1sp!Q?͘hd0B(Y}iXY u iiTеw9t| zF03@t)%v())`F%0"\Q%Lk.n;Qඓ,FedNw?Mc< wi^!tvFe |5Ygu \ߥ\y(`!«\FY>xd) A(ޥm'jKN~\gKTksn笔(YASbTE2Ɂp74zk`I֠X(rVpGN^9YҶD8^.&Uxh1 'bN xʼc@+NjBI|toE|%rbQl'mmD J8Hnlb+PY\9ꙌpOޕ6#"e;2o a1t^L)JrdT /'fr5m~XfG R Ł[YMas'17cHw2BJNBa"WJG;*@HQ0@is2%l4t\; "N cTVNiFRS^%5^,d\ CnUަ&UhbM|jF͖aJSI[\nơi> k>.~v?y?N^ sӇ"íG3?$&ol!y&v 0ˉ;&$]H,'RgI1#&MO?{ҖؘWn}-ߘ{ę0Ug3d}g0_l!ς, Uny r]}hhWGαTduag +A)ھI{:R#>L~wN'_% /"9U#*9edi {a!/\fXWQ< B2H .i%~PakAi'h9m*8B(:rxs+e62ۭmM1 p*?Ufir%~]Q \ sS[i FjMi]Eq-~ &XV'6pѹF?Y-~AhT{Rf!/lt5¹u"r la9ť懲mNtǏ 7[穳KLQ7n[0ro|4yf۟< ޅXAii r/?('{> '"sYJ.һO!0ɨZ`Mih%ߒ;dqAaV?ݻ48QiYJM\SNO)l0pS*tЍq9ISPnW*¡;hkZw3\4`ȩ6A ISJSKIEn2-9Oǿc9wfgR*:0Bf_ch2yß&}@Y*UԦZAyL @ΜƂ>>JH;4)ݕ:Ύ'7n{Xi3a8IJ@cC"wmSYF\*Hb^ .إ֔~6)Td '[vKW>y-*9inK To.a5D'<h-Hq3D*AiƘ,#1axN1/&Lw{1{9w!&*"zs#r.>C@75'Yqj)$CK!=QKH)oi#zirt"&:ff\A.\K{cj NsmF˝ :mm\q3D(=VBtp{1ߩx4sW^Gq|m.kBE‚pi}ԛgacPc q-~uh~yybGօ<|ddg]"RqÈ ,OY2:_~ꪫ6DYy+r%g vkb6>\<tg'[M Zj@DE_뉛,oA2@oCo?MPOz'B]-fwaK,PKzĿ9 e)p)*BsࢴS10憃6|ih>mV=fiw͖&Vᦌ(bțb;^6fssP]\ԉfmF*֝!6=KvϋӍi׍-)~{d &sgq.LtDZ'$%N2Ir% \ZB! J)]r#7%G uo?8ٱ>Ams-qdH:6®e'HvgNj95݊$-?;ne (mYfggّ8%tķ 1NK b7Ҽ`F],AV)/w5(NxBB9T8զ*o'7 Vr:o=rjuٱC+Dw%.& RtU/ǿgs%\zPO!2a55&7Ɂw+(Diy}y3|ܫQŒ1^i?!툒adU5lt2STAmirOn`J-+hgC;"%YԦS1Y & 6\i ivs#dԥmĔ*?J1(G;.T #tdPXJq(p6G=3f(l~?PP.W}b\Fɸ,3T^PD9S1qr3Dj( %rn6/ 6} ~:0tбoCeˢx`qv/K|G;"fq3"TGNV4ڍ#ny^ß^யQC*(L*b uȼ6RRE˟8"i6>{~l( b7~ cdz5dФ4597CV0+YiS7L!Z q:_{U! 9{o89 Up+ pwC3z-/!l zsm՘T7PH`7rgJNJA$FIÕkcH U<ܑ -Q(@ Mgڨ(hm8F&L5UcJnatGedGh[EW}Iu GṑVFvMͅAnf!CDb/&AVVא$y&wɄew&~hR3;<[OypB zָyLfnp>,߻b1LG/euq Mǃ(F9x b0 $Ͽ^5ٔ  |Q|X)Lzk*UB-lNʂV'%Hf%icfzOA+br1Jb%aّDF2 Lnܦ#;ڇFnv,TkR)ҿMq% (wxAYM4}W#q}|&m.]KfFĚN4zW!l25ptٴ4؈l`sLG3ܨHzp ZJc +%iH"`"K檩" w>DUZ7yu"G7ڕV)nPR pCRDArnwVL%Čy\,M-Uav|7(7þԅ:a_O7}?Q"8"-_+X>wF/{/st' x%xkv.ÙNݳ g3K㧁ڞ)Bιw2=" mSb:ȉ&g L/ )n@"[[CƬtw.~ i;U3δCIE4WkyӁΛ@˥iB9܊M?4Pڋ7 В)N%k*ّfhiWb s44PvR~|Ny6 DbCN|6P_^zt|˸~CB_!%vhϒr@R>(Z&zc+V @MREdxY] =!w ,70%"lіsL鬣"ð˲U27i\I燏\r >sCOPlqϖH&.;o%1^NgIO/F+wXBH1'Pg%k_F EGҏujz~VS_ey듇SM ު(3P.Ռ+- ǯv >`v3a0wkcìmK!$2>T(?~2,3F}'ٯXO24 ;E$Q7lP?iH`pD"̹Uo)]D֩ h89]/no]aeKyOZ M㓜lAR'f#6Y{t}O1FSTpHЖ&pJ+>j FcҒ#%U|L1)VbRfIS|m/l6:y\CπX}FC/-/K弑vQ&y1IϋIz^LIB *#A["hP˞Yi^fhL:V&Kek4zYkDoO[K(J499*;(w-//b i\ Sᑧ3DAȳ0&vx%LВKx3@+A<Pyg2FcW-&r$hqX`DG?~U0ESKAMNE:繑P3!jUVɅv 6&O.g/¦|2QlBxعx2yIpdoOsR&_V'D'ctI-zfȮp{$nxB 3bƹ$O\x G-` 5'ncxb&d\舙8F̺nY^1-VP7V:MBMVE(vWbE\:9hYQg jf-{5%h}U!@8UbYy2[14f $v @j\̈́D'4Aip60;cgx;e@cHL@bJD(_sۢ:UpRZ',vځ:NTIk I(.?QO7]C'4Z=]$_E_xj^R#$+w%'\BҵKJ #䯀9))T%CFSYII(2dֲ}h0/䯴z5 a_U!ZKgP_19 LF ^J#JqJL*LgVu[Idlzm.fp Y`d(eKlbrqRȹ%eml0o7s&۬Tp` %I讪wB1H\\*g$c(Xb1-$Yr2מ`oƭa6?IQ׍9Oe6Qc[q+9=tČhVXe*v;?py‘#ϛleSNȭ4RJZ ʧ^".)im$1,VX]_ԩVחpGeGa +c#BYr!"(P%cJ7+r\CЁ'0A>qŽ6 eyF+(ܾ4?W}Eg(aȏkp "aIJ|m*"+(Ȇ3%r:)-4c8;cyl+fG"жL;O#s/f-Lٍ-?_]AF]p.K|/IoT%u7T7njHlH0~n/&埗4|ͪ/a2?w2jᖍh=}sw"!DCooEQ院F[;vETm%{8p4!7bh/IL̹7d]_.rmen>k7p̠ۗ45ZnK ^kj_|7qkmZ"?F]cAA#t>^SnyuX} A R䬅TA,ي.[^:nhu*{ޅAaGp lg-o(}Ea&ȕ` \06\$][KɉJ6\Qщz$WI!~U%c̱L VneBqR0ݩF-\ܪGq6o( _W^wpsL1]^.ʿfdܖO%Dͷ^,Yj4(2].N~PDf%IĽ/}I0w௅>-׽a$n'Cuݚ5"/MË0:h<ޥO-Yy?VpaNdRB_uOwMwm_@'3]Ӑ:q%,v̈́?cUo>!"41 r$ \َjg7X5h!H~ZGr.oꣁUĝ?DJycԞJOtߴm΄39 DӰD腖L$;JSxsJW"+=/Ĕ)H-W l[K_J/jDQEL;N 6,e,y7DJA$.ah~\9abʡ^m ᒼ+F+O?Kd/O])g|џv^ߦ~iFZlvVƖG`ܿ}ũ7 Wi+u 0c|; #y o#d/)1v,Qy!iK|.jbLwD%eAia$jZ wynveY \fDu13GPKՈݵxͿ|TډVJ_yR2 4q[,Je0P sGq~bR ú )D=DgJN <)0&/$#}O28j>f\o$-{w[ߧܜծ0n閘`D֯xgnkRao2zMel"UQ}odd=$3CYq)Մ+3ь<$mM1Sd4_}MGEH6ŬP(8 DY`%͑LD.wW Af{O[Yk V;Ta]M^豐 "fК=5I}*2vX Ʋ ΌZU}ū{kC-PFcXdx]懭dbkefȞ3c?g,aҨ$⯆kkiLt U3Y1>.*߭D0l˞;3 1׎sR)C{q{6FIJ},5:(r = \ GM|'k^&IVI[OWYy54&>kUoc!ۆ:QI)}&Kk?d%֪%ŇϚ0x{ެu1=mTm691 }ݭtVJ7n|xͻߠ$̏i 
%:y|9rʪ`(@oFf`W4Grn;5פ҉힍<W -VnI.cd'>32lyh4(cn'OeI].nH.dJ賷8nUӝPAv;֗n[1j)$ ѶL笨{K~n%m:,=nݾj΁W9гP޿gݳMl;O(?ێ=|F2b2z݂"ZM|ǧ;]L `JHA[\yOs#с^>KxT,b )X2| XlvC (Hs}2"@Ou}"Ze'ʈUh8bS)PS|ƟD7SDcfA{} /VĿ?D:u,mݾݼ^ Y&5>#QZ=.b>K^ՆG! ?_Y&{}@ϯ8ddqmUn; +L)$z6Do-%nyn"Wr\K ?TZJo:*L O-eJ CoаD5(ֿB*x^!Rn'@+㢭hՑ1ZcEV2lE6˾I<31Ȇ~ \# ](Ppf0L~Sf\qb;=f['dJ 8'da'ڨ֬O Y\^gqnk;S'" bL=iIȵ6FE|ІX؁Fz>X%gZT{JXtHR{L!1*cӓ"&]uv5>}Kix--WQGvq.|C`r 1 .J+r' κX3.s wQ=AZ$HsMz|!қeJK!4 cIUH%tT٬"S XY=Q7'~&7  Q W궷|H)&C%P- 0 過Q@x`4zV QtX-kV &-z z9HJH&oJjDR}DO(016 1/=B"ɹ1hTo@CLڕ= '+.V&JGz]ꗮkqDGLK9Do#,hU"ON(SEt屲P Cԕ PtNPmkkJ) L'sݟn_'fӑ1C_IJqTue`[} gǽK@hf$K,^rŁ JCX˽.Kd$p2@YչQ;kj+sX)TtVׯt܋orM0 hB/7qsf?g2ٖ4U'|HJj~ F =6N59BmadF+mWZזKJq;ƇçiJ2 J2LP:M-hJ6̉ oP+}VJe(?pr& 'jMyו)W("\MZd58IskVii9N9˻]W%Yϝx\rm$g[nHt36/&VP27'pRO=9hs8OfAl_uT#g'#`Ere{^:˕\$Js +3R.+ޙ^2.WB4a8;:<-w@@ޑk W|Q/e#ڂ@o5}4u%*&D ZTJE\P5s{J/J0Oc%r %6O\ݴ&@L^u@5 ^^0^.MևV A($Mr<`mUZq⦆1Zב_q| w`X/8Wn/o47?_E껽dkU=1| h SXT\ MmT߬@RCon_5_g&l7޷_={oogۡe4<ێ=YA=]R_˓, Xe Ɯj@ $eud6Aj3J3[]rrtXy/c1F_nUPƨ? $$n>#noۉzS` sлl^D1*-}y&.)!|hTF{eUԋ'5kРISG( q֠£1%l%b)zk&TdMZZNAh2JTF'Jz@e"=Vx5anYp/ G"z >p=ԞIETB1gl @TV1o 1:b#tT@J*cBA؋F'GS+FG`VąB4vcL1U0hGB ČґP{f(DZ;vn)C9V3^Wh9G|Zf\+=2>Ҟ`\Vi}650%u3Ih9lJ@ MMY6+F&)n' RqҬYiDWz+ iLބJ:vQDa}+2h)D#`M<%7y|jx5N̥X {`hAd@\#EB*Y~95G s}7kAtbec:|n CmtvИI䳀Nł-K$PIPO:CY*s+O&L>>Cw$9[qi{=Y{JW5/"Nびekb&ћW~ l- ( Zp_E//Ge(r+.w*la8MWr[d_NɥǁJSt9O)5UNCFk"JrQwv%$N Ɲ.cNύI3^68dvjmTNPuBȀhŷQcEiɢ3A )201FI-@ Rzd"s,P/~GJnhwD ; [qVs\Mλ7"zH'SWH|ɲأs]ܝ\w߾} Cbed뤮QТ:2cdl[a<^9>xq/F%Ϝ-K&eA*b;H1@HhUh뗨2fl_~Nvg qy.= ѴyŗO>'>K'POn;j"#>[ƞ*w>]G_TT ѧ ֮(!x cCCXˈѹ譫k` 3Nx  |!ac͘S]&5i4=SC1(1wW@z@эV,x+eX*WF<)}9Otd|62/tBPmĶ]sɱm\vCE?ޢL:ypÕvqӋs4&j|ń1luN>8jG֊M55MGp%fЬz4A1Xf# &jrZ/{j첞4l^GZolJ ]Yo$9r+¾ExEр_xOF#L-K5,]:$ԭyJ#FY2Aغf5FPU*x'Q7F`=β$L>Y2-1oR썷Jsj䎇Q3J=58lNqNPD8FcEIjko\0e8Å%nyKc| vXZ!3j "$%>b |;HQR; 6'Dϑ$хyGZa#,"+fum'ngk /)ׁLio &\Az> `X[vݑ7Z[/88>ZOzVQ&pxdV574uqD9]gG|ggz`]̻?3r%n8c޶@̠ ) V ӌ7 lo4xmx Xv{9{\I"R\" 类@YC8RP[ukV";o4Fckq5^\[)|m!Ʈ6X.I01)Ls 6y(!bN(ƈv d$2\I $j -Ҍ rq.H zLN ~D6SkQMObK湊GYV΀궹0 7_xUOEq S2赋W_և+|_LÄa%ui?^W+/GGMXk~qssi~J,9_7S%!O z (tG/M~FQZz=2VQLl"q[˄;֚X=s1Jt1pvMj A|ϊRVOh>%_=DUM Ezi<G6NbVJ9#+ äF.}w< /́(^tcDrMuhJxSe2d1a]AdY XS4BYH1i(}z q^{,Ȼ$ fԵ {z[4vq}(j-Šu #4+xFIX{)!jGJEv7Mz0bAQT(5u우`&XISDX Gw3~m7E`w3m$vF*b38ou7#rt<1S|w%K!ިM3r߿z1i0 6%J' .hT6˅ڊ/=x7/,n$LQԶXV4? p*EO2Qx.sV!{UCW;9+ Iu L0UW=hQ׽(CҨGW'\qO/3)r)z)i[B|!mF}X!-~q ]VJ2%-}tŧaLlIaTSۄF[ʱ)ؒB*}__|t. *d6Q+޳f0P{(t8<2 `tY^=% ҋr{Tw,u}"kM9(*9h >Pr{զj18inT[zc~נh5X_Ž9bpñak䭻o R/=x6p )2NVe]xF,  ~>V,(ভ%y練=lhCZƴ,g-18o^()݅y&|xO@GɋmĈ"&Qo}_\~SUC( F`p8t7WQdRv 7?uG#AH1*qf8idxU#^>BTIوZUĀ_vA~1c8pV ɴ`- "IuܻX&!Tιxw7f,\+l43=_êXU6pǥ<lߝEL-<&wl~|PG"&̏0rw1ZMǿaB۸myXZ}9uXg5Z..:3]\],Euվ<{^ ,sJ͔ 7qFgx@mOxgan^zl30Wz8GnOlrOmb^}Fo3>}#Hf XMu1w*l,3.aSd<(p>or~}A0*'aF9HΞm0^ De; ̫-=wܝP 4Y%W !Z>|_7t18rwkNcAwTwjf*iA;GmךG̗M'a2 jp)M6϶b֪tbNU/g9X  v;fʀF:ag v,:TOd8BAR3̌1xjI!Y@L7$`z[ Ё ;^M۵-& }7zu`ci ȒW)B[*M+#š}B}.rWWKdQWܫB0'7tWxTtGlY|Ov̸ ơgZZ nxdzW)߿rC>4rɜ\0%u,aYѠXX+Xy:)g Je|ؤ#_<ꃶkdT[~dZA;M^}8hn7\"X%^Z y n>R&S#v!6(v P*S6"4*88=' kH[jؘKCk@({Im] 6T>(R¦./fZ8n7Ci7C;ћA"TbJo7gPbvМ4VڟN e^ ԲkۛѲ]@av9s@E!to*r M^b^6^mZv1ci޴إL a vG+ހe˛(o ڜSS|n(Lbv&ퟺFC tW׵oIpR5KTmѲKܴjȩVQn::wqn z#0\kPJ/H vVJ)/MusI +kDEøxckai#M U fue>yht ʞM! 
Sj ppԌ ` ޹0UQ6Q f_o_OKM S6vB89lӻ`΃*fHJ*՞[ i\ԬyUk GU#c-(U#p0a_ 9NIy)3^xqdhR\ALrZiK/ҙNuYf{e'`cr^!L1 WLQM2-ta;5q y{)nc1iCjyO9ZNR/Q ^=Ԙx0@ qTLuFO!pn”5Bc]2m(g~k%֨6mm M̍Ng-p\ ώf5#&B8jog@:iVM161&J V=l=yENgT3H:C`ef_Hcp~ƊU֢̖x{nU$Cyn& _NA nD363róݎʎ(}s|JbHQU'Mf%H(OXvX<¤~ۇ6 ϷXYvv dw[SA|qqb3Ow eFD!Gn_"dH C6"3D[`o]x𷭂e2Hl`ݗ"LxAxvhvR轴)]s]C"`cJg4yk:- ҔV"@5 l7:!(׋8 ⑎ Q]Jf,%Ԣs|mؙ4;HEfHSyvt%ܹQ8q ^iXI7PcxC*Ɗ]è1/PXN_qF0¸o.1NTjն+'tb&jr?HM 5 alWKcjaU3뢶AT:ŔgT[q.eB݅j%.^yTRH?Ɋ]oGHwpTN`{Gk] a4 ~+p0 84c6RĚ`GyNyvj]]sRiu\0q 𧩅^F[XpAle0\׫a1Ŧ]Uf0KY(̊e[r)Ѭ V(_[Ǯt!X;nd$3P -]} 2l0I<)XN;pˤjdgISe lɼT{G%e# #A9,L{el H((bc $KaZ4½ K0fI1 [rJfԒ n*`9P)xiFD=p%ga"h_%#`d#ZG/Km-l3`=ya斱m"Jǰ[Tm0pP5NVʥNv$g$FjW{ͽt"eEʒ]2ȭTA4JᥛaR(2K,"b( ~jj m:z|r VF( E\{eZ䛫{'9h[չ1l\}öTBսWOB cv]7׽ A>'YxKսont /9Zag  FF`VO/TpLV ߃bbz(f J=# 36Xw>X;*%BҊź J/]w0Ƀh7`Z ǃJsѬ!HiN[62A( is Fy+ʷ!P65B"c3oցJ%t~ZC3Lh3%TW SV%kqE5pƄ(Q-d5`0J00x2giJrf@to$huD*G!xaJk݅jkkʹP8N w諁4pq.Aqb!  w[ge8h7`CRGS0 `)"R݅j1zkp`ش=&2`%@)k6VXt 0V@9"#`P Dj/3U^ ]w]V*ZG JNijz M4=xP\sDA ڃ먅ځJ &dp9V ,™ŊcRw4]]:S{7g2A91M0(2<9/ >:,_>5Lgɸmz<êիԫGVlN>[̠/| OX|:xq^&g/.wpb_/_hFp}?BϞ<9x8zޟyx8p6t'%AOg|>= nxlF};Ǔt%OF(<#L?ӫ? /_XIHOFfxfFei#WY[_Gi/332?fR`z!<*ݱw-0ow[6-4ht8CY|1W Ho5sUżݍ/.c|l]I=?r8 sχӋ'ykx =6"|߾_M&9MVxJ_Ϝ°?Mί:93Z`Otrz_Xt9O?9 v.?9ٟztYg˱[7Ym_Kst[%!q5[/>_t~zRd e6ҝr@= 8OXmn% >clLg{'EG=]Xkaa!԰K!dfڕ)B3frjSgٻ6dW=9Dߪ/INb޳ lEr(ڲcoƛ#f##vUWUuW9cK+)- 6H NsLrۡ!hKf&$=CC +rSlc*|ԕ|fVr}%hju"Φn{ W?R^h4ۻߗ_2:|k~H^('ƅ:h>x9,kccE79#Mx~ޥߞ}:?N02x;iz&Co39=xM^|zm.~ YFx*;)mǃ?_wEׯ8q:Y O^^I .\$%H&3'%6gpX0˨Rgrd2X"zI$r ں .i +',G+Xb׉(^!e ){nHsC&lrGƒŠA锲̑ˊ0% AyϙY$CX( *x+xp}xp ᠎B$ BY"db 7 B Ԉ 9b ! ظx$x̓ٻ%$[9_4J˃D˃KqfH_̍W*t6OԮ&Ou/zĞkDعfk+T/uZm8 awo7r $1U[YxN2 fZљ-c"ᛔ&\m-PVˉg{׆w˯9{QuT.|`cћmsGpobif][=x&8^i²tsNEhWS1PAqKU ;*k)dR x Y, QSR td 2gJ0PȊռԲ6Sʹ \Xʫ+oyT$DS+8&m3~ Nuh 5ÄgSf UHp-m׹|6:ч -Q!jf,@CJR АRLiU~Œ-ӓ3nQ+]7 Én^]lf*4c6%H)FJɴgdɴVV,2O@iSL1VEH)bDJ#R"Fh݊eVA:]3,BJqRRLGipZ*5c["S4{mwG%/6Kc_X/j컯jqq_)ֳoo^)+5!o-KXUE6Ϛ|qDꍹX#olkOCA*eWu]LӮ\1k_sډ:h8Cf)},{v5_}9Smd 4C =z= RvpȥqAquS HZ's 4RlM(/=nhP .xEBs p5(5-x53qzܚ7 LkΩ42z].kzR uxsWaYC %h܃H2@yNJ.̂+U49d lK#ڛڧT}P{ST7ziT1JY;TmCU S)<מj0U?N-lpzlRׇT@˝*9qXp. fx!Q޻&Gh2IX́q=$3 P'ζ"*'i*ڛ21)tp!w .KKtޖe¢PL$ c{@|: ~ UI{Y!e +:w^^[xjȸv'N::vs ̑'%XL uhxi7f#Y1K5F''J4TnE'XEUS"j*j*jb}|(/'iF+D$)tyOE rъQ0ŷXB7W\2''wږC)!ݡO-{᳷(Ō.(+jU%TUR[Im%VR>R; N&v z *@渐 \x > ˋ\"PR)!&)N4$Xc\ n=?tV=m_9׸6(B z(}V\nΨGkqK)?H/&.Ish@ ̑(sZ\mWN 6:TWNX۩llRjk-kMiw*%Z}(N%b=׿43rm *PwE8VDFR9RpYqfpZHyeHB8@2VH&}J)麚麚BxtAf: $et 7603ÝPR)TkY%JCLAGӓ0Ak#%x}4"؏ȫ*+ {d8+VQ'TT9>q??rʁ*h}h,FK޽ P2hEH)&&$sJ 2.0e}oOm/J?2佫VT v$5,:IQV @F'F+T9MkcM ά$CiVS n%DpyxJp+ylr}:[Hq}A%VACP O)n4oP}/- $? p'݀~eK!w;Us}/ef^)3s92HND9m](!^IzKnL2ݽR:W[neV3elC|]HuZr,䤰@Xeu)n }(AoPjAԂ:98ߑTŔ}u&gY>QLe4TT>1ʀ* 2ʀǀ>`5JGEN8y>ʌ9]i:nK ɭpJA'r@-t-zAwYZ͐KxlKJg:$7T<3*κ cɢGLt b;h%Jj+'R{U'4+JjIjg"$TJb>擷 i-8^' s"T@{' 9vhmw06^^Hgu۞>u iX\7V1!w_ S Zlz`G "ƳƊ"0O}`Rǭu"Saw  CA:yKYǸ+RWl]L1D_]g.b o}0uաՆp8> LJKzvк[#>{ۣG??yW /\[ʇտMaܼ~DeNG_=yAAEtc7JeKL48VfZ, XTqA*PC/?=>8br3͝·hezh)ۚl&O2/YrKq-0p6lL@ْ2Vʕ=xd՚!ło`O%xF6&W+*|b # XqQ@B= zeu]p/˛|$5,ź;uSPP/:m`ע66K.pMh[MW aw3R.+%u*Bjg $3挿6D@jflLŠBZk^:d\&1zT.s> '+e Y7ڍm莦1y\$:iIBlD`Lf;N ]𜈬 ^TSJޣ]ب'9ӄx޵?{=fܹ6,6"{/߿rdo=; ?͇t,_5|K2Ìc/~5r/$"ǝ|7U${brw￴nwjU/xŏ[??<`!rbT-~)[xmx>k>vfMv J!ޅ ʁWW2(6|.ed'pnd? 
jZ, ٓ8'㝣J%k,(QFa|o;NM!U,MlK>Xe3 |RqSאzZ';; h=4B_F)OȹA;IW/{{^d.=&Zᇻ3Ou 9E<n eSOSaҺ 7r\>+n6B.Z(ɉWw kڔ;#Mrf<Y0(wq&3/ 3W|H$2V{ތ u>ivvM0D&L68H6 !ȵwH#Dı0HExpkxa=dNjD~PܚߎQn1BScA $DAa.DJAІ=ZU*i E$|YH;bLbT3kAKш;=x>ReY桔/o|``At2{<|+O;\FO; ^z4W̒Μ9Ap74n Fnylh;'1 T1yue0Y:lYe^e!E"&j{a&}I8Yl}LJ8rz5:(Ѳjʨc'FҘS*pC~,5Vkq8kO)??|PrSaKҧ(& Q8eG oロ RnJ#ñ$*ήf21wUBzok=97S0 5̈/ƶͨo됙3 nt=Kf{wwnExgXmz8.sTY~2v :^-7^רl:̚~  j|rLœGe6TMϳ w.SnBrFrc,§x~e F0-bX R$(hZO T]{ǟt; H:v-Tã@n>&9"U"a]*yKFlXG|"3p$F [Y}!pPצƇ9@AE6"!Ur7^LIBKKkڂ'94Ƽ Ek8 VVcn֔`(Zځ=(&P xxN0ώy&}f{_ry Gz<3`8&a/̝j7k\Xgk/ɟtWBR!s$yL JG vy;`fs`{WrGa9lu8f@/_8sХfՔMf?YpiAc*2tKK68^0st )ʖlTjQ&`~~~iB|}mLG!qȕb"}9lvld ɞvpcQLekrBe@5=.- nLJW~+xl:s.~J0)Toj٤\'JTSL50D>@%. #k2({P885d+V#}R[MT:5xffj|#s|C@lPF}@e6}sLi)dGG2?2x&{18bƵY0G`Yi@1b*ˋfGc nWA%d)l@/o=Y!r|O?UWǏӟ_xP~_{veL,;Mo@>H!%m ɹfDéoR2njecZ %9!F_] [^ ?d({P~twB_Qp?j+4n&) ʘ<QlpXTM&D48`ggr=grF؃ڮ>:Hj\F̝ǡ޿oΫĹHьB]4L(:Lť~ vK-=hvnM|ѡ)F+0*mڎ$ `H8!#I\Ajhc ^*LzPay+'?ڙ5GMcfw%sڅ%ցq[ds/1/KЬ>Oucxe=D3I>--sɏ[)Ť%Z,5nҢoʱ1$ ¥+F֫zH0w!~:{K|G2gVDɩɀG_X rre /Q5#UY$pV` fȟ2p:g$RJxs8<vn1<~h$̥7_P赩IM.(}4TEJ%IM.ׇ@d)W+x~B# wȺqaK&m,y$g+>WXژGؘLUWJ3-7S9?J8ZɺB8GoSvAhǵ{p \N״Z 6X XhjpdI'YѸOmm,|u.y6vr NWhC\6cПc4roΉmΆ~UJVM.h}8,IuҜ޸?pJDǼZ;&2/cF{#Ӟ.5^\~yc;*6` D/q'ݍ% zwg|{-Y$KR-Ɠfo`RԢ#kJ bFȌpŊY]2A6f;d8C0Uӊ>^,U7{q8.'<.[lJ`r[ ڙ%Cݹ< I5!\GHBè<.`x$q4&_>|@6ūfzF_v~hLf홇 V[t߷J%J"ɢH8M}_չ9fzMMQ*#W6~0mY4_ewfOanG{~V E:?W?g}y\E7K2Y=~%p2ZO7xB"͞Zod0ςhSet/ _os];t @~5ǒűjaĩP%4[a.'f j`篌Jc z̟xۻdٓŁg5d>Oi~,_}Q_cy܎~I7p; GH+cs^0V5׫pMd:?=W%YM_§QO0 |scx#.MM͊#%ĕdF`12D"iz[!K-q9/qXt j,^7?+'ŁqcmTm&?~Og8O; }>yEeOn66_NE2F,]KSit,K#BLL HpA, x}迭U觛sX__;mmڒUhGZqq`5?%f`6K1SDΉꢆ7]$2cحL| dE{Y+E"#V~8<0Wapֻ r+~#z<'&IYΫߞFg] \ ^C\bIÈ{ F63k~^^ts)W -BcdYp#IZ2LdD.jX|5M"PAA42A("%1&@wic sU5MDI3眞^QApP`MIWB!-|pTl*JV %IdF,&qE3`PD#SHT?'f)OST%W@)sO%7tv , i%b˸R,2{zVKD*pXKͨ67Yiu^LPn\1R%ʹ.Q 4Q:b1i6h[Ն2hi{XR$ Q"l*bBub0z >h)啒-' w"kH % _*h8`$pގ *XnhLuJWTȮ}KPȗRAל vy# ;j4y~k«%+EENJF(נnɯqX{aO; )kn^DU8]b㴎aT o)ϾcdQ/c$3{xxdsfu<S`!ʔi!^ yJ8eI*saaq2zrxdGiI%rI"c(,P W԰kLP~*8xsxr.S ] @XY[.1NVxaDKzDJR=Rޚ+:vc1$XR$-%.6?y]lBhGkaKp]+(դB¦Y?OkC7C%_|zB;zD -zB߄| L#+#}xHh'՟$ϲ%ٷpmci%vO 1y{{|o| V&UyZ8j6hg>58|Ks̀۟\yNduZ$39s;e eTӣ/wBb5RsBҝ}F(VGwj 7L6ȽW6>9bIw_1|5>|v<S5jfƇ'(KE]+)?S?M2)Uf7HJiȷ/9:F\ߏsFxg^jd]NtkUivVc\fϦiFl?监Oy(CE#$Hph$„g*$DPDEY<1q ֭FBW>P љ/PIqgZ%2.ƵV€ 7G'uuH &{TtI P]h@գr-0=:_%fS{xoʜ)NLJ0JT J5~OC!UfO $+q[O^ 1 ]*ҝ2C;ǖsr.Đ5Ng.^wjsuuO;4is!d6CEl C2MiHDe(L44F`1dXb$(  Dj@&t$I;8Խe)Zp~b$+1Ny?C4IKä4_5g?hy͜B F,K&I"B. igh , Cf$qDyB@;*RE}/7LeUNPmI^)7@sV * L\ɨjYCN68%3UdڂX' sK>#3҉|iq,5$r7Mcֵ<'=p -&o2_j% $(LZǓ 2ѡ(8)dIpxTBUgM.%H[We`23c{yY"j4M&pq kG*Bm3M쫼(yGH Iʱ;֤A(zz_oPwVvʪ}"#r%-j~?Aۗ.MF[23A>-ucz[fЂ}];trS[{p7z*;իPN(R}T3#]JW >E1aԽoXRuOXD:3s]3=B65nak%%25boۊ0+Gz$Ze`U֑nҹxCUpI +K6ciʋ+JlE]i7&@E-"IqTB 8ĊcJhJK%yKؔ4S+Vlšp1I}~JU+V.sAW⥢;^[pwI=OiōM/T~W2gKzB!Z(f^兮=XBHbYZ52MHD*Y5$e@7؋jUKA +ȪǬ7D:K,HEURIV.$BHnӽ C% X> Lvu繃JSNMTb1ѹwge%P !d+ +MCgPP.T=@@"։fWN(r9:̥f[O)Ѝa9 wXL;mݟfT8(( \D IŒS"cA_Ȱyn!j]¾1(쪵S'/ZwtR/讣s{Pn {y28kt> r8grj)N|ŜPgSC_ֹʨwi =ZH;XwVPq p3M WW 3BxWߧHq:c5BPՇj b[Qm L2s˂֋*]~BkLurSJbCS?>[ @,#Z"rJC-k KQKFU\fw1neT=֒RjopSi֛@V}fT.RM%@!3J5Vڴ< D@hRԦ쏜`F_̼9{gEٖ\R)ce! R4$(!&2;J٩96! RE/"X<_q\nڻݡڼκ2Vj㬻'm'`[&a2pON`=W+.y= 8eT=RL蝖I[Rȅ2OE)]FI=$ՑӴ{rzr!(yG!`jV"MZ@{!h@g,ؐiC) =ݺ E'g.Ͼ ˧Jw6S*>S8.ջO[ ~y~HMcLjC[>V΂MpaJ# UL4=sHY>?DZkiT!@aSYбv /#b)Qn)c gKNڞ iH)"}_:q9m¡xnuwI j-zDteECpDnT+8}Z=r$Z2* Hx\mz2֒%H9*q oXJb]y,~ǏKϹ1MhLk LEͻGWMa2 H[|FHfI ! 
Y{Oa)l8aeNL:/쯫b/Q{/);W!ciXQP&=bEc ZF\F|tMסvg6rݍyȹlpƵ?,}E?\Rںd[B]kF|&րN\M ؞\wzw֠]=6 ,$'Xcѕ+X$::_!}ǩ-ASIF@It_s N"4eXRaH1VHJԕ +ϧQR%9jhv&t0N]XCcutAќk|tYPUE.a\փ+Xy˃dYKz{@j.j5I+jI2j*@.K KK%MifM 0ǔt1P4@P@S!8¦(e:X2]eTVTr;: %'Jt@֊پNkҚ$-% 5׫E~P )*^"Dj-]"m'y`mք0= %H1VHk/e]BXDŽPcrJ={ 4V\ oAćBKѵwh9z#{G9K.֖s}0z,9ZO c^fv4*}n(I4𽇇h^ 0'X-fY)m}s\vɗ"xPᢶz~qw:?4 9xu3)d7{,.k3Ũ^,g=z`:L0$ 5t0w27!;Ȟ ΋YVG)`WZuSs^@|`2,"b?e.:ARJ81ȬC!Z%&yc3lRRo T:]4 N~8TjjH2!(a`X Ins=z2;T&Q>W)9{2C|.slq @"M֮;3v&eۇ!xRa`,@}ڜg! ;̾݇hd?*zvd|A'iKPgo)43sȔ~y?{[IhwVVO .IȬEUU',^,W-jH$7x;D+^]g5|vr<SG@}/]\h^ʥ{h׬k'b#k-ffnlpv"]E 14G4vE~g2^ Q'5?oXU!dEQ P]MI˗1s׫^1B2j}x?YKrߝ {;XuA.KPSހkv+jUYy>pI8HoΚAV9P{ݺŤ{[S+FJ&o1ۿ1&a>1TdMo_~Y)h*[ES,R@fKML\tMhZ=Ǥ~߸.L@=߮N%T 5"N+)gT'Y%Or_iyd:p:M'ݝLF}`mpGI&׭N &"/ tzpD+ПdY,(U}>R}u> rdfaP^8ŝyHL0K)a4&S( &.[װ.8<BiU@&jT~rg?jzS}ZBvP]XG =p f1Xv $UX)Yˮ9ՐݔSWd&(b֐6.|T'jH";w«~ɳ_ӰpwxW1ͺnC qyfIgwЗ.|wqśF6vW{ě .:uųŨƠ8dܺUdВLW&=o00rʣ);z(pyO6|y?pXiyj<* .n@FK#49jY'ağoNJ}!}aT ehbLbW$Bfq6ۓG?pٻߢ6~iam4Q 3q ZAAJ.Rϕ;62VpjF*Ppz"PS3j[S>"%姶 a@AR n@xNCUm-UKsKUE҉Yn+kۈ*M\߼w}1\mSw݅z͟]xkؼ~y GtFuϼ8JeU4nin! \;9"N#9mNmTzGZcܸ@h1ӖL63C=q9GSLs9&.-է;C "M mZP'{Y(d x(#Q}3&ݰThe q6_J%n3wkYWL̄.wMwɷɷɷɷO-.g!:~ڢ蹃ϵFFDDpMvmݴLwojT8Vn]A}1 MOݾ)zf ކ}ov\硥]K]Twoʴ7%C.(X y +lS1%ۯVo,$_2SVziLK];{6\-yQp;; P^V?S _\ޤ'RSgPš]+iFNUZ4:GeeXᝍ @+/\hէVc䊢Se=LȩEeraLZm5% i*j`h^bi)#!X)w肗17&Wڴ{0~k:M4W>i6Hs[PPk=yRoz/6g w ;H`bSW u]gRZacQ-0f"q4 ^#12c0+i0]!C.O߮Q\GX3-BLAr1W3e"(011 A'dġi-e6JQ aXһ`tiW+!wMGIA"x CCm Ӑ 1q$WM0𞣯vdA;,kf#7."+E `E& "Q!'`_BBIwFx02 \IB-BPiT@1ĜPi9cӞI9?نiuIibU0|~<{ECdM% ok ҭWE?QADDp>%tKfzb1.'NpV<3 1-;?6)'OAQvoO 'R)8jS- SFZٰt0RV$&{-?FkժiӕnG0tI9JkvU9EZq3CyqcۉhiŚM歮6↡(XM& A[fEǠIP'M;a36 ʸ@+;XY*<[Mh"r1JffM&$k W )/s`#A} CGH,( y6ŏ ?*x7CuGw%}L+_ՎdCu$=C$H 3&hmM&y>H_;I .ghHj 舠NHŢq)|\&H@f(AXr"gy=MF5}EP%qqeE$aFipq]A4QUN 2M14}Ǫ{.'N D2Q0(p>9^/D_FQc:')dX8ش5 ͹n5!TPL )mSF8Lˆ謒 teEmoӠU^8jDJ&EMVA_ԐVƬ+55B4 *_W!I&Mi5mRi@)J{pPM3#R|8rIƶS*[ a q:OM*QM648B% *_ղ6RKJhQq1%ypmJ%b2V1]uErڏdPV%V .0V[&.@W7qtI` kL+!Z# =h%ZI˱niB{y銠X|:?BOQ6mS E!8ĺj)ezT 'D9 F'"HOdƥ fuu=$r7P_DiN*,n=yskc Ӈ\WIdA;x&CL6\܀Yӳ?6/lOmVvHZK ;o[($%{wӈ:F$|(?dFܽbpoM0B5d}nIh)W q2k04@ke;kп~ߏ^2$6{&dLOO=zY8~ ɢnO$/'Xj˅y>uRJGiߞtd:+&gb'H?hoB!Q=1HIL܎P@c_yN2aScMByB‚䄘ь1rl\c "X/~1jKZF?#kdopִn)Yē~ %q4%9SU_hmh$wd,Gace"ZNqǩ4cS%5rj9jm|zsq܏?%ݱGuRcwO%i#.ӮuY>`팖;|U` >7.̚9Q;SߙU_]ZP|,~=s5;|׿Xwq،R4 Qt5. g1ŝᶸ=\tކoȝ͸k9I>F\:Ub>ˇ7.nq[x5'ʪƆwqUȹ wYkGh#h ="AUw~>:". #=#~t(f#wВ ˾1=7tznJL3! 4ƿMξtlW!vi6>q_>w-e; ;=4{(cΛ0ԇ /*Nry/9\rN1"?Bc=j˃7Z%Y96[ޔɍ=`w%fs`M}սLAbcSn+IbOW[|)^Xf}d,d<%T)fU+5})8| yՎ z+AA7kz ˞M`Q*߃MK&z[TbO*iI'"*%tP.!P?&^DZ0ZuaB/˸,5' QŔ3QK1ǃ'MSMF$AsFN>Ra//-63/BlI19͖z^+y)~gsQy&B_Χg#;xm}4{?oqDVo\/:w=գdTq0K6uA-'f[zzhT;XukR ["9Gpqu9zLĨ :*4WNt&=*NW-oL8YnYB_ݻ1 R[`h"!Qg5s Yᒡ TМ궥ELhyPZnb}/9aߚ"v#'bOsK3;ؔ⧫c\7.Oȁ@+B=_a>OK:A1&̎jn꜉[}U ˞١E{R׌JiWF)ܳK3[gꄂN^<+ ߸°'k.լ2rZq@ a_z\*^ȓfOo.}s4&O]YˇܬCnެe>B\fIzl§!S#z9LKiJ@#QyWa|}9z{}}Y̆ Ȯ#AH[|(Vy;i5hz.Cɷcń扎mAZRSZP'jQ`Z-mO4?O>]\ݴh+UP=^3ǝA)^v44 oM/Aݔlj%l?X5St ?Z@h\~4{x=U359[gfx'e:֑q:TN{4o{qwnQC\8zY;|ǘt=30IsmKwMw?훪2M67.!ΪtK&{?c8Pד_qzgpg}pc8'b^ťPN&Z)KZ6(Vͧ,}էl٤z 9?/\f={ra@ˢFKfql4{m;pE7F329JonKqo8Obq5m6dzU>,-{aT_bmvj琅F,Emjį]օ:`F)%)8$ ݢ4E\ I]Nh#t# \ct9SkӅj 怃Z䝆geZ-_geOY4@XT%L5wxASUAyD83 &=*D'G<z,'I.HτtS8xcDꄞYyh6D(75X< ZrǸ e%>nZ8'58mW~ZAK[.T! FrUy%<{A3Ĭ탋*k'AzFW+MįS[Ѻ]!;W5k$aEhuiE8 DxDK\!4׀[CGޱ5=h Js^}ȍ&kUB~0To~(2g-m/\1~7+Bϑ1v['n!^ft)⬟qЙ7'osj:!/w.,o'?D[ʵJ=|{_(s z1+E_Vt×M,!rA\)θ*ﭶ\z7הx_;*Gt[W?eHv)XmhVjݱB{jV,` $EuNkE@&JBah4uRpXNyꡋ>HX.T9// kR-eC;!+ Ue Ɍ + (辵sBL džAs! JXRL fU~`.5vrɻwe=n+󒛋P2_87%LF#iVZ(5EaBjfNkٖoRÐ(-/p x=fc\i%z%N\"[H; Zz}Nzj|CP>OU#ٗn-hx؞\$ /`)@0T } e%EwKPJ0W.dZsYQ&H dƘtAr)R$yrH4R,[Lwiw}:?pwv|%* 3nyD!SJPIWw;4B_d<*JfD? 
w7ls]Xo.^Cr7zt"9;vb&n8 {XYˢ :AEw@ö_opyEOoE'؃ L/DɂמYnAkv: uV-2!L^ۗǧtka2hS/Qle%1Qe,T* jokv);=y`>&B_~Q<ɚɏX_ bNūeݱr- ^qJf4jvo:IV{|Q>7 qo|o#Oj >;+R4 HvTӆ15yͥ$hʹlV7$|R deg]}>/'1.rDm\8:)mAŗY-?#XY lÚvבtܑ*z5',s5'ЎT9 uQX'uBHHUOqta|ԑ;.5ۑ $H\6Rfm=9Ӷ?G삙7c̛yjBE3GTC{t JkpuU3[x[€Õ]i+lpᣊ)}5]ְt1W*\S-5gDV3w0K~Mf+nfS݁Km~Uw+d1f *c4.9(}&;A5Z.BZmTVBJi[{5[*!r4E~^K3盉Yn'1|[ՠ|ut쒥3 &[2Ͼ`j~ ߟK@!̓G 779uBS{=#pa~RY[^t@أ|Z"*jɗMOќ(%kK1e9ަ qw)#O8]|9Y)G,662Ì5>s4W2`)$lT-;G"ρb$_ МK';nQwpEk̢ ؏Pi~o0 JƸvŁTU% 1Q^X =B+qHqjfh*#CۻgOY&j YbwaOv!(|CB ,Q>9V`'ۀ#80Wdw][C'%p yO6xGZTcV׽!&DS4-W{c-1Ǽyc#E<MohBqI9IdNvoPڪͣڽ ̙joD+jH)ba;Ofż\J,F׻j$Ky%; U<xm!DNȨ|nɤ8OӰBKxI JN<HE 晿q5ז#QhX+"V)Xo4+،Q،f}4f_b Zkc8a$TcJƸf&;fK/ ^BvR{7'Lsů1aX)Z-nlsF66f7c6,?vZ f,upHN)&}kDW^a ϗZd^uىզUhl]UDW"d+]OҰq[už3,>Op77.Ancn^|4g=K*mF'yl}Q'(mR3,cbljD9 I&6N LY.R"`~|5Wڸ' T7]lvH#⅋L|";] 5txW[4*Qڭ}PX.7"HAR;CAbJ6էM%q!hU:ĒiU?_97a"%bXBs4a"}*܏\q#{CeF`,5$rXaD M4H$g1VQȀxԞEQNwe>wR8W2>&ȾTUх%c\BԂ"Vu /9('75y%hGAN0UG= ] }1v3(,ZNvuAۮ<[=DH1.C0 9N4ńwDn Ʉ !Zjh8v N.@Bz'^W:iu뤑IN**]r44MJTTk.IAC)g0|ZwZgu?YZ8S(KS & 7zqrĿTtrEL?5 ͧ107[^ul=r{"ѳJ037xz^Lon-/ӻ[L|V./}md:|"N^᳑f.}J nXTN%Kj:qإ*Pc@cAQIr&ٱ)V;qu|K>T!ED T`jǘ").A%qa/T$'xiܾd]; $.KiXBk&]aaV.lUa&!!@ݥ$=/X<{zZ0~UBTBlt1bX,JЬ$npSE.+Ξ~5gLP)Mebe:Ĥm͌:p} Âk2^׶Ul"H/A*`ťE=+:JRWLeA/'quJR-'+"W`?@B$R(U~PyȁE J289@,NRR: / RZHR W@QtfMx~"U\1B*Z^u(wZ>&0¥,N9MM"iPNX%ha %Ƶ E7Tc5hVG6nj9`ՉR(5:qXƊ{\RZE`)V1Kv]H5MV.XNJ>oBnBQsSbEt*!KPs85Pc rXSC$Ÿ`\蚠jv!O0nM?@HCd"&F,!c ۹Y@plMnFP;BoOo *5ffUOV}x8cǿ_Ȩ\H;/S.zãzB,SPlXχ,i ؀Bwg|ץV 'x6`RapvΊ)LQv ~}/)p&<Rκ~u = ةIǪDȸ&Hc\ ;9;xI&I&Qj4V/O#M8,I4JXmX fP 2xSz]lo4EMkLClϦBTщԘo do@*)!T۔1jRsF\E H!AiYP=!CV=-. p2(ܳ7ǹAz]xiM+, }V2ƏCb^! c-[N 1.Jg5V]%YvweC*qG7YtW%T#H` ly v_oDj/5WwA)R4)ԸnaYX̾Z3NS6GgRc"R Ʀ토ݵcъ)$q~`EID|>wlb XY~(. 2^ކW\,O5}4R@T<© q^GJ(Sp&iǒ1UE2&A(-GEWfp JBo]$SJAXwdY/3z8?$s4ۗm( u84W-L+Fsj"ѷi<}DB5R\N> R*8}ʅoancy o+u6-UUoBD sP!T djX:dIBUo >-;/(~og\#9'$@TNZ+[yŤgZ V4fp f=831b(g zmQ)Zz d !t~ p'gi# rQgR; G`C &%䢅)%99qDF"5!*o"?4q/uBAhelٱ,ՅJy3ASS&)c+sN42 p땨0!6SSR96twq).P)k8n+u'4* fl'8E1O6ԽR>\W& CU5Zg͡(q3N5wrN٢RGJ%@SKV/`дҹ&0:TzfTxPMJ ! 
?!5-7{MpsS&p=gci4F<]w}m&5Q1H0h5ZAx2pGAҗ '9prGt9Pꀶ ^49EŌJ+Q-J.P(B(rDX%a9B,#*.1&V3Nb 6sQK7 M1)|ZEPאBP+rOkEEq:p4#.xN0tBA2iA:J ֥{vњi|ξgf K){--6}gQ `;jJ4ԕheƩbPi.Oٴ\Z Lhya϶H4v%Ŕ~2yL܌hF.LnD.IJ:(TqvRv%< ڛa*jg1 -DOA!p..)Iݰz:>g=V<="͜Ԍ%m~LÖ́--KѪIn#cE%E 6f OQVkR]DzNN\C>»硊01tS)ژQ 2%8%r$o\d3FA6r[PPc" rhs_Uޱ zQ;s7*mCw%8 75,WW/>ɋbI̺Yy|cr+.t݀|Z}TX*]䀩4?t$P ȿ8}6SzZo9f>fM[/_c̦sxp()hZoަ𤘤O6$>ny][ ɍ9NNEs䧒.-|fɩMb퇈2޼}\gNYtR72~4گƏ!~o.A*AzF.+teiOƜNhe:`W^y!:=jt]-#izg,]]6fA&Ǹ{8Ȱ9;7FhuȱѬx&=Tpj7(_$<, $ffkrޜÛJ4*iͫ9 XY^?ؽ$4W xh\bM?|w|wt-}Ej6gGFkN%5#+DjBAN5q0?rz2 F/z9J|>LGH$>LR8W;Gwn@ /ϗّf!aJ'mֻۏu>gsXtg;GM |Azt5*8)n`ӹ4E.d cm4]:T@^vD%!9qcNPϫG9O[gvSxu6S2Vp똨mxlc lH벭G udؚTmH8-|ڇ6)<3}9Z{bGWۿzԸ Hf!$d~Z*`ӛGxq`ͬ>{!E#''+ 5IY|hNտ3p7?.ks|Wz5&YBhml唘w*1 zoq U?|9ڽN_hKc(3N:ұDˍV=%ә}y RH*ӫ7*t}X~艆r4LVƒEBՕhAN)G.xngq&hb=|s>^yݛQ5|O=>-5J/W1 fzx,4 :F1VbzN'xbDjzx['x`vFy$BseۖmNX>Qv:_5}i*,FyHPYȢlH[q TkK.cvXBv=F(U= gl#2zkHwi6l)L4*;UͶi+5\=ՏvRM>G1G׻J^ک=n RP qsQqpoPhm_ q^칽{Oc૧EERzzIK?I%hH,YޤY/Yzpc ځT'5`zųŃ"S|گBΘjK><{rf..PL@$Jn+TJR Cܮ2iD)JMO6xܳp5\2(ǧ!"E՝5&E;?e[2$c%t-IdTJ?(+ɨHj&Q%;-yj[뚆@TCZkk\ސd!:Kt)2Y8bJD'҉dm[2B KsA(f8;'1 qu Bt$2Z[TkmV_X2nNq IOQ4A.D-67wݵj S8Ҋ;pfHg") ZNt(侓ğmyC'ww$;O#fC9r/6$ROZ&W0ղp7yH ,P}&(2Q, SSrrL)i4nj*2 `,Quɭm77Vj( 9Q7>?_,Cq|:~:p\qg{/}Wt~lrmX^?|edFa@s8do>ʔ`,w.<5歵s &!#(܁RV1‰È(xMpT5 Ә5Wφc" O&?t^d:?+_t7wAΔ"u1tC;Z3Awx=O2s 9eϓ' x?og k_r$szrߧo߼}A^Yjz틏߂G3n{qw|{//N_r翼Yz_}5;~= g_OFNFH~lg_EB66|ZfE҃t|"oK,s mCw-ɥ^^yF-,bVa~PeubͽҔ| \Nm".: w{d×6&o-wo1Pl/G߇'@؁H*_ncjo_;P^B~??|y5}a]h|es'@蟌0̋ ?NƯatϜ!7 KF2|~~>c[G}7\_< r<*9cKhY@C~S5z*Z p:kNLG#SЛה* iͿk+~(=M;SXO#X ŵ$;Iһ,4D?wQ Npo:c G/wQi%lG֯/ݩOW++$<cE뷓Jܩ1I.*`_gn5NMEg` n]mϣڂK1b 6MwEAR߻7ӗ?ƫZ+?-geujD\Ѳn\Ro{ߏNjoޫq˚~nxF<#C'Ft`Muͳ[nY/-GɿL}\#6qΚZy6;ƧZh;Vа Fr*e׌^˿*Wǭywo>]˯f3mqLv.xu'#[d9*rTHBy5E&$|m)<ֽ{+ȓkK&pyO}-/W4}0os(Uy>Y8L[;-מW''=D ӇG"E-O{x>^T& T~o B"@qc:|FN:SDs)&R>bz+8'c:B7b+Qo]_֍O[Du ERW5zI|?/8١17 bBó,&4<&4V!#hS;VڌñzM",EK\9UiEjŜfKn,vIroҝŬtg1+YJwVJW\1Af29Eb\%q M#DKb ) y&3%Inӗߕ$!0NH0n%nf)µܠ`\0NLU; a,Tp<%2o(+#% =UjA kJC%`@6 coi 65M![9/*OKwHeNYЩL&82͂FJmA( SM?783Ki/ b1$7"Y,3ξJOEwj.vGSdZ"gMkN2fM}|AXN5WJYQ¯oG1%ODAWt޿P%?}z:*UJg=jH$ky{ųaQ40!;7oK?ǃ=4LJ__>DC#AZǪ +rw47|!ϰ=/O;[VJK`8viT0RC" UJg՜ nj`䳇ax\#Ppl6[T2qlUo>CSQA Q(/#;@5xg6}Ǵ H&sjU,:U[3Tf,&rLTyA9[[Vݑ^GvcJ *M."ZD)&+vmVh]KlL${i&+s<]y[VR%e=ݰoPf6R{+9-K9Cl?4O} ^R=[!*m)g)4s>G&1j;.,Q{ğM#qWyrknyE?MO.s2$OqC.œ3\ r']RuÃ'K Aux kOG(msoM^bMi5|Q JmƉ!kͽtZ9l]Ę>q'bHGqdZ߼3SnK)MP5>+;^󎝅\3/oF"6 R2S)2SRu(h)]-*›߂w $rJjHW]WR?B9JS-x dz55Ž8Ju8Z 8H&mQȌ{n!KT@~E8D2o,"FRw%hu3鄓Lc&l K^PWICV4H=Zr>t=#9K']k5%* B gŌaKcŹ71 dz1l+{<w ~6=,_}h^ډ曇",`8:h&Hw*=":sar]Zr4ܙKjcuzNVV֥-m;jZҁ--V͸~P&#^k˨wOV\x.33|#4VnHJ.2O %N $#Omr ),pRF[>GY,X,12!cG'a"m4j.,Ag))5'GHGR FK `X01T2P /'2Y *+<2ʭzM4,;G{XE>?, 60 ` ؊ <(pMe@d`-X2#UTFZh/ #+!,dԄ>:E3% I 1XXH K# z-ss Y3$&P9b Iv0cH'ȂӄHj0eGw+ VJű82Pm%LU[2pǘeJ)N0MXV7>4T:é`RiaDr,L2iQJ,uCأ3iKK-ǜcPe 餰AqYb1YmHs>-Y(\bso#%yx"5Èy_:ЁhP=[tʅ i>P<߫hADYjDUKruaE㍏O,\\t/QT7(✔ZH_$%DNe*}54V;j%\d]R5/ɟ/^/w^a+)*C/Dϫ+N8NUg\ oo^ݿvs&d|G>0q͵ѵa9SN7{"yK@ "R % "Q2h~ڷM7\,K.k86TSΙeG]+cXK_wq_0ٟ*exne&v`V˸kG 1?ܳPZ=Mf8gcpT)fDu湳Kw[|RȢZqJwJ1U ӥO;{-Ezxj5 Ox \XBh@KXR>cZoPxœ>|X(mkJ ,;U@SΣ XUV]͠)a?(ͩ8-ý RQRk 4/DI`jh㳦gRШg7G,G<|ɜěo-$5%^ÜdԧQt*'|D pkF hC,SSaqe@3qMi>}U"gsn[}\b_熚RZ/)e) #L9]@Iˎ95>|0 U 3GR37{~wowQlQc9i+5 ]lc\0qv ӝa˻t*ZGX@LaX Z2o-Uϱ;>Yr$0t"rgUV;$oeh}?.`[Cm۩ wÁ /gԁ[V/_V6ht "d P?WHSy,L5ac'PݟhGJhS[pU|[?YDk>.я(U=m 9?JwTvlb-oOFPfk8m/(FlWT$M/,b(-1m.G!5SXiFKs=҆?rYM[$|IC[/O<1J}֊+ _O 6PRPIy2L ~t'q1U%ܝ{:a+_}8)'O7S7_TZמ% لA/3g:.̰<2Z\qW_.#fu^&()5s17GX!瑕'CYD3vm#%Hp񻶣SxМQP&\) tvPhiٗpCW^06/.N3蘋s9aϝO}MIvT7QcdW:q_p:zx$ex؁i##`4W葃z3JGũ9CSݡ^L#%2@оfXסضE[J@ γ I\&LQmQ`Ӂjn (aI'ؚy wX`D[w PJ zABД7fҐSkH:P(,{NdnIi2YU *yUu&m}\Q`i/X S|!^I𚔄2y4 
,#F,Xwa6nf*)9gV: T;&uI-P/+HrҳE. :$E c VVǨ(2-FC .r[~cMzST7`W}r(isiD{M)xn/PE|,!(1JA<1CYs?$duԲj)5)[B"ZqB *! 6ǵ; 1w4hR#Ja-szCV8ցb4'ckWM`M@<σYrwtk)aJqmR.!3E1&l$cb#6TU#(B:MbcdU@Ҥ^9q{$tV j&1rZ%5(1RJJ킋L `s\C bZ(w@ѣޱax"Cg`e hmN{[,vh#Ǐ,i@?]m7g|?G)6nP; TG!ngW n*eU*~G{i6^VdC a c`$DZy'څtR;fowZfYt}+es. 5ҸXFa-0 N-N96Rl@\iYkr &7kvÙ95A~6iMQP͂O8A -@^Whi@קR¯v%=⨢qUe2G7OOF6q seT9yzr{p1Ggp$stNf3(8<'l&czVA&{YOl41mTpΤ)d#L5*8#IӲFGpϱ~|@=U*  D>r~7~uVΫb,,9T'ҝSq k5.ƕ,N3{m Z❽im0Iynz ݐ#l*٫ZTgZ_(o Z.=YqlsҾvC>c0y !^]ͥgYWî ӛ8: 8_(Џk47@n,Ca_MƒWP-AX:Vx)V-%֎ :\ ~ Ko$)P 傷y"dMi3d->-\/KY~^l=bukWj/+~J ;6o= Wu"BR!}۽1,ddZοzt?+7ICۥ ?s IyRi9p_P2t.AKsg7 K 65Эr>uΰ&Q1b B+9yqM SC2j$hÿOε' Ji(\z)1ipiƷL '93oƼՉ: @X~\blyD}&6sGC ߣb˨h1L#3T{hFk"-Y7cK[=Xon¾$sn+}?ۏ/|t}ЦVQ 7v U y]n.n~ZfP,.olphP=)rq4H~%Rp#F Ɋ-"Xj 5;sN ]i!c9']P5:znV{M=$CZqoǙ-Ei=`Qp4 z 5 Z-s g!cɾ٫!jmzUV _gd6'ꉙ@V_ۉpf\̼\v*jǒ F6%W}ۡi]M㫿wkr! {1qyS^UO,>na\4Qa>B+ԇ _*]ڄRW%B^yݚvZZ79hH5]/Xz`d:Vo)yAKΟIFTeoB)3kvwm͍X4 m.]Oݽf*]dv+ hkZGݗ JLKEZLR][88LV U??~ë&} +a)+a+#>xexSZ< v}ʐ $B}6>{.l'giSQb D,(Cf1#e}>MBۏ)38oՇҘ0Ę(f7"4BDDޫmW7lV)Ѡ(d$BG< %$#Ĕ"2`7 yelPXb{DDm"ID*oxƸIYla>Q~^׵M^fQk׾j~קᆬHCJ`̨,W^<8<&:Vn6Ͼ:%R^O曱JE(ALp"R H͆S Iԉ9 @#(Eu^lnאXs7RIibx8-VzRsw,MX{&:EגHn1eDzAA QG܉biP4Rmԃx?U;-jǪہڌi;m+W HZZB\*H1eX]Cd.@.44)iaBk1(XN5zա.J:AI$mlQā`%P[ݝ gpuUra*zDE)tj%1 #%HEt(a 8U QG :NՈX%j@X>Z6~;W/w~XN1B?cZb*^CzO4ZOؗYvzd3\J=LWy9i8if_իtC͂$^Gp%⎪ToY4g<8H>>3?G5Tp]O]pB GO+vwqY"9ʵ@le1|No6(7]nS. ymm]#2Gb>ͣܣ%yy|C -%!|ylܽ&)BZ\ 3r6låe%-/Z(q8Y-)۞$$2 :W A(8d7SJ6(/AIW%&k\갢%H A wŰeX{mm] 0fS[[ r S[RrNoL(C+H(bip;6) ^{,z~C"?7"LbE)B40`Wc!%X"" ,$,qdwwWQvK ԏźtJHb/j8~i,_5RyxnVtmI8ZYA-]yC8AKEG\ uRQUOF_v"[bEM3zNa!+ufF$k2֩s?1nh!!}n\KN\u0Еd3L'9:+ˎtf|f.vfay=3d*uiIYęp\d~.;y"5aa/> 8^knd9p@ J C϶U$O58|̰9B.34D˾܊M$+ROJ!w>E ԝ|g\8wYvT9L<L N) >ܰ&N$ L܃H9Ťt.x2|&FbUOj(JBږlhli䌔Jp{vH›,1UuE(t9"ˆ _xY9s_*I fER43$ZD̘ź`M-N,9Hd" AViB`Ml6 6Öd#?zO!y u=Ͼxa|a5`?tKtAݘ~tb '$1A^@4TQbRGrrU%Pr?sXoIlOL?F,NGaJ]v+T+ʾՠfTvS #$)e͕mܝ]mN:湛/ o8l ;/j.l:ꤗ&Pt?װNxFlj``y R ;<&1#&ǀK C.eju'9G͌֫p,c5/$%OYoX^B#T{ɎnK k28VrY`)SYE/%/Ug זz0RKyDהTae-p <.x'&k\(69G Do[`5ޔP5~wŘu!_@rҕzԯ5O\!/%?ayb߯>aB9[qPxE-+C|zƪ1_'b}G˰TU^s9E46&V8,&v0kBӅX5yPKI-nݣ[W#_v/L⩷fեyOx#eF?/~ȠF-Fx6^o0Reƽ2 QLQ*I%iX^=Ek*P+0E e,A!bNS Iԉ9 Hg~A *-r~4i XR%BS%]W?zcُW{?6x>oӡ#P$UDcGi0;.K"T Ә':U"up?/_xJs[O W8lEo_YJ:gҭx9I+BvNerн9"P/ \~ؒļ0tS0,qlP؉ca+~p<^yfXy?x^#C玲FyS8L_<##%X-vOE'\A #1o&AD Ib"#J%0m?ږ%H>]NgM{ڨxFW =$Zc7rv=5[bhHLcBZŚceosZ Ur>=FK.sYJ)}3`Ȼt.Y?%TPqv*Ƴq֌ࠡBhH<_|+O M; 0wtwSΪz;R?Dq[$F'g$[|i$ >(_KB)G fJ5)IB0v,J=lE׏X ;mbIBpwљ%#g*ہhqv>ֺ˰Cc̏v.#R-ϱXkD+l볭L3L9ADUccY33A4g<;H>>m2$FGO@,0l`r"CI6:_D^ld.F绀ȈcLXSW@C-6˲$X7-52~aa*ǎ!i D$/7JHy7a<Z#-9 لE $YHJugC?7> */NIyVˆ䮑 $qhd>5+ѨˆWBZooF4Ku4^k}rFJ_8U!e{Rq˥T ڝI:W߯AiFԃ(Od<#uhtR?>룇>tz7 B*x sZB g0拜S-R C@vs8 νh;= py}aWzY'KKg/MB.vY))CF9DIċ Z˵!pR f)TX7cSbmΐvCQƌe8,uT(Gb?-CkP@3+kD`Iގ0Hα5 "L`i# jSܨ?]2Y$*w֬=^Tt1GX`dEViпjKM GF4 y}V4*-0h j~;,/@,?{OrO &|¤Ce,l4D VoC&9cuyMry ؇J  ֹΩ|sϧ77bz6zPRz3 #:'ȗiQ9Z|A9]|M]ˇª"},ѰO"l88XLU>R?Y>&r~[$'0db0Ɠrʉf:9[̬+ʤI ƪW htᆃQ$cXG@i#qAb4K7a}NāhnPRY涐;P. 
Y"TS ڎ8Yw%"oo \xz=?6]o_v_uAz5؞ڬ=d< MH%x:Ajۣ`!,%llV 6Lb̀?%Ȝtc/%$ˤ9#Xe^QG;ؑ9pJɂ<FHO,;^f(jma-4^Zo{L|MM TsjaF;C2؂r؀ 1Vְŀ`9ϛxǿܠ$ % = <t͛=Y0`gϪ}̢ dPɹF20C/e"qi4Kaj`cr*xQzIآf;:NQ %%-2_haL "HȝBĔG } S 0Q Kղ$壅&%3!&CHY!e!RV !p̉9"&G+pZ8#8&QU0csL aO(BH/pU rH$CY}vng|`F ^ho.sD [RV \p(w& ts &!b &!bR̓s@[e˰y*0XF[@+1g~B`06,::PR<'u%RH]"uYe!RU#ug`tbT 4)XPh(8 lf^]IcҎ;'; & cE2'AG%A#)ZI8 ݽ +}zaYo\==rO,X*q?< )Acߓ?Wx;G?N@ÛGx,q%)4ǷcPbŁOo)?#ޔsk拿TM<@tV0Gw7a^!rK!/8yPXqԀ ⚫9)ߊ0}C֝:S`0)/1  AxlE5 \SNE"ae0 Ï$]Ÿ㾋[>=y@Q& :ˍ(\\L#A( ȩ Ng&RhBXgj<5~KUSS5jDM&PسU4B ')^qKI~vTS(=o'{8hqc{)otQ9otL\o2m= A(k4RvyrQc@[wCY1ѭnnRi<^U^dwޥ-SnDUۖzӫ&qi&Q5Oơ/DnQ-5?a=Բry5~['Br̻cw]llGxɰb`\>^˾ǽsl5$^dy |>II8t֨I~DvL N_Vw1.H[݉}p՝# Bk.7ZM#rf\q%8#&ТM [y՛"S)?530EjĒ\vvФpPNKd v.{[nAbsaJ)p!4 hz&73X̹xT~a,7fXL3FUtqXe|*D0<q)Nr2trP4T+ʱ D~I >, NdRĕhz\Е7]YWr/{fbWtP>[ +Vkn$%2!!@rzf}g c Aޜ*p9̔9:4eF mwH@ЯO9naœ+<+RcXp`(,æĢ`<4mɋcG⿦ֹ}'ƌ~CÏ`isH]롃[~\-OΦÛ䕽ۇrj7%!#]9:YZ1/[{ *S"׌TIuZ)Q(da5Ba,)IJA]6=h6V U*$x)d9n kU#Yd!.0W8́ c!Ŷo3aVT.=@U%o{ٳNIOh֌`J #*>hJc"K {^0GB:RzVnn7e-drs3Xe=B]ἁ'sۥͰ]q&?} xi "Z_Vvs.kk7M ]xL,D;(TK'VS1Z]JeMi A#aWTiKe&GggU4&[=RTAC- gxaWVN xQe$09ܱAa wQ0V3i4،@"f1QTU Ç($8,ɛǵ֒~{84?-njnG 8!"@ Se.^2}|4)#!Z"e'iMtqIA:*#HIb6d 7K%=ҞMaӕ|Y |N\^b^:އ_|u&elַx;:ٝ<9JռMg+i]ΠVL<Zh1U)iVIFrp4鰐йښ5mn9iCHQ=1DU0q; po*ۯܪ4DQ:2ZvWvLyfOǐwCV uXzy˨._9d@1Ӝb+CD!\3Ɛ!'Xw\O3skV^;hl+9.9U֖Y?*>wƿ*>p]ЇϼXblap+(ZFΣrԳGt^t(ڈTb!/1<~֮ CeZU5ːӤ? 巿[Q*f+Uz^&u Pz]ByjWQ\4z4zuh˦LCX-ΩgR?=l]K}t~2Zސ5~_cX Φ!5v@CnWkzwMM@smy#e:GfCN]`x.Vn|3fD 9({7@qGט44)2$^B*| e-| n=xZ{ $n-)Р3Zywg ĕB 5#- kWH׹P8`SxzSsv>&J:D\RD5Dv+^K;J;5Ξf_:D{*b"iϫ鍒NggݏNI?4C(v͓lC*(M0sS?}IB뵞#ӖW3\? ͈\v3R(^ dr2 BX{Z:4ה"t~]oD!Ru>>M=R2^}ZYFi++UB.?wG`'+l>z9aA 2 3) [r-0advw$PDY,vC+e [`L\﫭)M溜$V0Yqeu!P0Zşr5LSUq$'2ŭQ9pUUw?+ϮEKW`NBjlvY y| (.}Wʈ=܍䎯!z&Ѭ:r'맓͐5QU; _ D2,o~6-Zl[@E{E8ba[X)2#QY<_K|(A*b^]^imJLjQ<9T9i>[<~bT3UB&9D=/'p<99"5n_2Զȡ4GfOKf%SyA T™DmKSαj \*Q%|cj(ߴȫڄ/Xԓj1@wŀWvc-Ofǫ?WjkS5,# רw-\vf𼆸,Ym{+~1o',JG"W¯vq$͙,Qv#J>z\fE_/x{d =^xݮjѦdOv?jfٜղz'A02^Wk8 qEK~’dJEZ zprt>הLQtB1[U?I`D06I;_܃׌sB4'm't~pI.kpi)9m?jE=f5Y(&.C*2CE&,Bb?3u*y^vX'0SCγPqKHdHֺ<(}q1"ގN {sZњUSװ8`s^8A,h5AC;vM m"L=!~| k?h`~uNL;khxUkfHs4(fUrL3±Rm_ţ+vO6߄5뼨ZHZ"Yio)7VH.+D9ΛPO6B+R"Tp~{ s0Ճ#7m\3$ߌ4^ }r _NR<*{&c7y̮"veemQais?~}78 t;3 f1!lv?]{o۸*)R|ps(6IE- =T~dgHɶɏޛݢ)fșc8GzQPfMkRIzc5=]Q4V5Y_jלcП65й36i}V8{5)hE|=qïa«xԄ!r5T9i0))Sߔ؂{nWn}S7#n;f؄e vZsfRlt{ɯ[¾2!W>0Z\ z*VpdVt[Ws gqn8 Ӽz1L䐅C88)YKoKh7 B /V2p!ٷ@gcxS;,zܬ}s]6̛U-}U_QSFvhE<`\n:X_ro\yZQv>]DE[21k/ ǻ9>r1 hO1J,k8fmxd'3BnՒz7|$GZ6Wz[>Xi* nE.}8;\iӱ]{Y)%غgrukl& B(~ !2T&,}l:) 6+nwnn7v?^tC$w:ÔZ:Ô+s孄/oYr1ą|uwT/]ee}ۅ4ka~~C ,購 sδu8mhHtȏcci̚˶wzX:Mk{gפC1Əu;Y ɞN/psy8/Eq`;s3YR뽳ޝ6gLuwظNEPsytppT^hmvRݝA F}rMru~kU49nQj+NZUGW硞ʩ=F*w)r ev C<.ҳ]r %!).-׷,\Q;ԋ=,I ՔL&}J# \I- 9Es<*3lK Yev)Xdvmx'v$mc/h|E/P #\a2Rqa B?QD#ڬ"8{Db Ʒ8|+<KdW秇!"dy33NS,PݕRե9FkE/s"O5y./2i\YPg燻.FWfsiEXŐ;ғ3OČ} Ff^PN rj~ۃE9h?1OǎS$hn'~Ȏ̷e;^١ٰv܅ѽx._^5o\_K3qq If|š6[t'Obf?l\rw.%\U/Ym;_-fˌPYfKs9DlagwÂ#n%/s 燴J4_cs6DUE%اB>+KOC-)(m3.$ԑLzT稶aΊ*gsfd\q AU`I2Q@e!Oc1 {*[\A^r^In9OPJDTNH64=IX3Li٠ [5oNs4[սJsGe~tN'VNՊ<~}D_A\èUC*=~8fbn߷ ͭ~ح2>;|훎cŇӏ7 e]dkmܚ<]ay"셣ٻ0,*^L?:aOwnu=Z1Fg|>֨W9P_Q4V5OG6h5!_3f4K4:wFיƦo߇k卣8WӲ<HI4<i7t;!*$&BN0) )YMQ24g |3cJʽN +N4S6yW*!~ ^ţ&(1PƫҀiߔ؂HJzA[ܜwE/ HeL>M=x׭zad! ofmmjCmnXh‘ٍjKx$J~g@}rbRVE ?[X?#SgeEe*5cR|N>DN ed_}3=N! ܙr6|׎|Nv5߂>0N uO[XZZSX.n <P+ZXwb:YW9=ae gm'+~u :s[^뢕fZw@狠yJ(xj3TfΕ\Tֵk}[շK"$[f] *= @X׶ cRL[`yY"!L9v&# f-; 9f?-rEu{B,(v И[q GfٵyL1r.]vs) +iA7GBQ Jp/T>P<<v=- _Pnsy,^s2'6VceA  4K \Fm[XY)Eض'|\=B)и@}PְxfqcHCH@mXjԅ. NK̄ Edؘ2Gw9l`!VTBfW8N%9|TמO0inJid_Q]G-_H͟/Nf[;:R{ՉeU=1 BCF뛋'ѵ'V|B"e݆==T-diS g QantISAed:u/H]}ȯ \BF,]J_ජ עj^.t BC?_tln2'Iw :+2! 
Kǹ# K!~  lohK!COXÂ1@BYɵ-RΗgˆM8XI00l- i3Px[y»P=Au}L223z5(4CMKV':v!K rbM7j<!57n+ٿl]*GɦRUwjJ #Oly HLKĚD@8mn.V% UMi/FqfJ5mڹҪ(KiY:6bD덝eiwlVpE|ᴁx+qas$ׅs:+Dc`/DCAu{Fcd:{6|\Q z Ɖؒο~.rkG8cp+="Y1'wa}RI]h5qk+dJ.}A?uJxT]E ClU*W|>nЅ7Z{,BO5ѭ\Gnpd> f .O>z >6Dbk&WzEbGO{T-yߖZiakzMg^f3cDm,'rk+gňލ ȍʻheA/an>} kzOztN!P檐,79 I#g GSk{P" s8yFk}dڮy`~|Ū 8Y׀ml].ܟ׫n -Rd^B vGVM}C[$H<(u)|X)Lx0*c!uGD)XGmNmݺr{~~2] UKCyP@ӽS}o65sw6HFP#LZu-/G\U}탆 {q+ۻ_W ]E7Jqk8|hL9ɒ ?;se=YIN+wyOݨeOy828?. x{pu|[/67Xs\\vќn٘4W1܍Z^3ƪ\[ Z[},zp%֋qH[T(h{\m*{zs)|Tk^м_: /*(2.>m5gP2X3~x6MΑ]^,I@x ; M?;T( H ?-G`2L!) ; g $u'm{0&!JKif@[[InnZp|]mTH]owI&fZM4 !{U~␀ ($@ɖ秕 [y{r>yQ9Юgsaw\UQȿ~px Jf⻷休ZQd9忽Pgլ{\/+irP>7ޟ$.]X$BfeOtzP" ˞Eݧ6( $"uYwdT^u}ۇbeZg^,3 (o7U&{mO˄n,SCM?1MC. el8ʹDDc`:`CĶMQhn4EH1j4V5׏d}ʚQ0,9)7S 'z"w^H#%B$?=[j<fSL.jb_}+x5p͹zNTtgj4~!^gYsmhf2~zv&ki|`Gtπ^{J z~Θ=⋑z9rSD-B"jYdQ /,A"|PHr(AJ 46 =vɊ#۝rRpT0i-'ˮԲ"N_=v ]r]p37w "BNSvG$^d &9Hsbu>qN+L`|,fQ 9Gɐ::t݇.- (21uٟ7WU :gJgIF}245G &d#1H#1HM bپb)HHdCq.byPjKv#0IY&}U)#N֒ D]mޞ)Ծ5-@BqN>o}q@C6Ny/zo4Q=!߽Mpw[V\v $HVp2E?ȍRQƴ4~.{uWJ8,DI=߶l(YX;atdKH#=hE t_H gi6ےY[^&>}1_Jي썜}YqJCt-Ex{BjgAv teCez*Rh Tƿ?sr6vɹRƩ}9$q&>XZl+1O= 5V(%gMRGIfݮ(J/LS'@CL,ԍc\kNLR@U.(4;dlzORcg21w`-"}(1fȏvf;l؇CZ@TUFʪkh;.n VZdcbh.IgFÜ U":/+ x:J),\SzB5Qix.qu(EH'OX0gO=a_DaV_Ņ؇D?]rMyieHq=N '{lZnkQ@3G! -^2At1JdxʌwL2`*R!3e"34)cOEY1O=l> |݇wH&U3=b`ku3W.#)," cL_C%;z o_0sZ7TO DD$y{1qPJmT蘯^u6<;ZSW4<^o C'zn:%[s捤W$'M]v<vo>)Q,p3)rz׶. n@B>|^uBh<_w$%lqtɓs{)dEJjR@JdZ*Q*1iV'= @;5O1-r4DsR5gj"n "4Rz JYyy_A&+B(] ጰĺ=$@"1!@]ЫH m`(6^!YxEu>ʄW59kv$:k5Y$LĪ%\KFgkyF;Z&̡@>QRir49̔=n[~yArP}k}_]I,`ѩd 0ƘTfC~6tA# tH;t_Q ݵgmy /䨉"qqܕIۛ55\bs17UHnVQ5iqD*DqEkMXWީ~{/e棿.+\inVkcintbuWt-݋F4[w_y_`n.۷ۜ.(6qMN0}z"ۺʹ|P 90"vy-־]:x>][֬ Ə25GJLLE ]*p&]kJ44'Dbe~_7[^"R͏\~g ~_:32~߽ͥZB9t_&VH@YCR-T1^^fCQ_ۊje/???{WƑ /].qBJȔ REUzRأ!ÉtTWWTWV NhRἤ{xl %ϧzhY*;Du&D+2 !W7uC֎˯^fhoƓ}Kk֊2t}2#[h>O{veAj${t:>|?G'׿/ya偔Ci7ܳG٢EX?]qLU1 #UJUgߨ>o%yG6Bl~#H*yL7V濒Qc\Uڎh[;HmTDS% 4%ahp?ŧ$C{ڗaˮ{g;Qa&R֞y^/!4ȝU{FN|>@BDx=ļqȴ '1-^tVAl!2l \Pl7|;h#vԭBi*&*패8Ѵ(, }M YpKG!=#ٖ 1yVy/nT9+h.N/T;a5Xtx]M_0nfz.𲻻wAVnA!8}[dDR\g*oZL\`j)!T/6:4^XݞW J\1tH5Lw @e&4{^zL.͘&3DPFW츜d$A_-/^zy߱ s 5eò_ j~Hc8;^4f TK 8On4xd.1^Z2s(naAPC rBRqɹ6sռؖ\KRy;l=O>%@T `Z83LvZFHgZ`+2(SOhC@zBv֗,i SePћxL$-L cot2D0Ƌ|k#IŅ Y|qbG('y\0Z3Ns S4d^YG Q/@y ѹ xЈ3Fٻߒ6p ݩS[SضJ41l<Bӊ[gY0@)(aR" hdim".2ۀGЃ[.WRں>'OM'Z$@c&4*S`, BQj s- |nKxo̟2q/Wo_QBeE+F٭O޾Of ;6@w.of2변d)D@ ї2wLE;4{xw(>'%1fUg^-Q8|4EfS"j|Ysn/qsٮ╝}~}1vo~=׷c?᳋Ne]lצ}wM)94nxrїˬ j ?;~fLu{`GzBl |Nnh!,Kh]}[Qt͉6oED ez{hd@vHĕsl{Kj7A0"\$A#SN*.yKR$l>."uXTm㼯BPy?l 3JE)OZ/RI0( ra9fD/(_P Dm/k8)AZ(TP+Ѓl6v%BmT ~ɇ̅%f L|7V{ܔ7C̪Pq PE%=Xo"bQ:"QOFdh9ИI惏>JZo9rR1%K>,T4;_=r)=-| 5{7'7ԉvjR]&=j{ׯ)"b8ܰ*ѨMB+&ji F-e2[A<&t}~R ]_G^L.g Ww4/e` V}OK6nTrKmf&ƛ J>lүi*K\ .ǰ ȌrEHohQ?`5 ދ^i2DP9W߇VѣiX I@0Ơ!ߖYђ7sɛ z%mb}%LO~m4Qx3׋?_E`jܹE0z]]v񿝌k2Yc'O7g5kPy67xe F5T F>9kw?vS{4~snIZr60d\sV=y(3`eMLtO_ h[^V"dmGMޞuC\Pt9M.|_:ױ[Yf鸯UrJT26LovA2)p" XB~79t0:!T)6ٽ74 .rց6$ ]٭4AfY|udl<'ffBMJ偤 (:1h>qRg&IԤypZHIa|JcP 8p:ȒTzd& zڠ!x5Ϝp# ǣDf k*p˚,8H`FpSzĉKXHZh`k{B#-s؁٢*;uN8E(:PZ:0}=[8(ɝDrfм4G^q" }+5l*54 r Pt Љ-jvȂAT.c![^2_ ܴ{HW`Q.^F\*(Ɩh,iяeⱩAQGHY#.FK.뫬Gt la{nz֣37E݇ACrjۼ&KVowYTxNQV^2v NyB)Db;r@5;uW̪ a*C  {?*XPC#N8zSd;d~Dm>ydwXLWYd&aRmRF! 
*S6FQ?Fտo YjdTcy6>sZWűKz-LJ*M#/UW]i_YtPz>UbEYQA TRO[=AV?icƾ}`*񤬭ѲOmR}ϔ>^&Gvm쌓vfײ p dfUZ0qP+RuOA+Aw !hv'⪄{amq ֒˖>]am[2pZtq;.f]Hڬ&'Dٺ]sz_UЦmt=V\Tn6ʱp),?mKtѵ77cn.~GlEģIEFuRc.ӕ#}$Z#QcymA*AT$؜3(RiZ9O?UȱCA 2բͿpNmU%UjᏢlN?Lo?d^ Ç ;Cx;ɼf7/gYh4I+ﮦ5]M'&s) hzcµ>mHX.hHu}I5ВLVzY*5=a`r&~eJl/UX}ԤFi$ њƩN%5Y$U[ ˀ*a*5;ҊS ' RPy$AD);|,5@&M\Ϧn L%vul ZiA4iLZ(mRN{@y  # `Dj Z֌ 2-I jۓ S ]Fңt[=$́Owۛ`M*fYG%xrWii+'ҋKcVM 4λja6 .cАryV^ ܸChsZDw {jj{AbR#snHHE; $dx؂/ykO<{Hfn_ Owdz/sl+5|77O}>|}w9Y^G$5'qKs30n=x<;6S[{5zw6=L mCCm,+~w;QiFYvPxtoaا5ҕ1$?'>|NW7\ΊWPwSpm N/c @szqH%ѫaL@ ,_ѕ[[OoClг5afw٥ ,xw7 -S!Pt-#&J77zP*76P\BjaI-4l8o.rY"ӕuגLyI&f:޳:k\Qinr]?DIg@'b1;,9 $-f#6Xq6G6+LZX7$6&Bncr7=2Ki= N)q`N}b6U Vr=_VB2y赵`= ^EK\ώQr cRtҰdCCO*%SYV&J Xf `ce@>\RHb)/(\jK$ oa^2\ ?7PGcۛ=*!_3X5CMQQҋJRq#m:ᚁ\.F@i%K:#fD܇\=roP1 7 !̒\`Jy­qb4O\+Ox?fjE`0H^9]$qg-0fDDs6k9)jyWBmZWz֪$2!5.׀j@m8'@{*ThZoJq:"S\zdҠ긡-z@u>)։ ޡ^T~7GA2NJCS/I0=W(Vf3%:"qY`(dA[8P4ui&+Ajh2 c}*WOFćM3凈J ފ{%Sf<{Mcvyw51sF٘ 1tH0ɑˣ%i%0`[Axit:#WY|OHbA' D)oL]t}rA4&I152 ̤֧t3M02nk o۩(8v^I]^"HR][qQ1fX4( ܚG)}VܝSMzw1\5AÂyAf!G #*_w7HZ@$w8JLiQ-uv[K ԭK%=\UzoǪmH(Wy@U] c%3jVd$@GRY(N(W]:;SjÊtu9a.[rÊӽ304QҹRJIG{gVJ-}}BLjaރ]D֭@~]>_*h_݇qtwDiMk&o~goSzon0yyn3?'Dә(.{T7%)Qufõhp)& [OgFiprs[3Y}ɈzM+ֹ ㌬湊+DKRǩ!hQ 2`Gc9EY2p9PT^T9GմPQ.gI4 GSP]Q"=o;Iceh@6)WR269 zQ B p(i (!Ήh]j!5"Z^th&d"hk+~$3虜+}Gjעjٚܧg{׈R);nV4oJbFlzjPV,Wˆyr}&Vsj}ӄ7'8nEeag|oύ1w\nqE[~+ㄾGQx7_CR/x[lCٯ>+ Ze=ykԱ,e-=}񩖷vёZc0+YR\ߣ澲ruM% h#ffXFʀ[S RD;n!T@խy4qukCB"H+R\>ߪ&()7 =&0"9X S%4Vx"̈́)D&YfhAmPY}e uql%̧$'|0~ŘռM=儢α;Aǟw1XS$ iõR1=3 wBҮ^tv[]gs..89fwU5h =^;UָEveq¬,ܴ *[8=Tv&cknvw6кW/~v\̋Z=0e(?RnXR5꒲{\_GkMC-J69xSVچjL˷chY n޽z%hazvAVZ6QAJ[{7*ƩPqy glyzW;D,6;0,fYI*9/Z-xjE0qհ[%Bp3opAKKU %MLs*"V␅.N 4i=c`ۂ9.Tfɵ(4)rI7Z܃Ah g8>\E4*)#TUM(Q*aւ/QR9ƜPwxGXgϣP@QAHTm eEjnct+ @8-`9Xd J[hBU׬6;4J_"+ JW8;@z9%I:['_=:}Rm@4^L\PYf4ƢeL)](NJY5g9G\Mh!BFcV%TQh] k䌇\ i.mBfBhi4"'h7'xˉD]mz ]q* jfԡp{J'~:hqxKkm-T;qkj@Js>R6FX2< SBx+h = ;H |q7|5AQt^goYFF% (J$\O&)1*Frt5pӅg2Zb>P\ &֢J̞j@K(;U-V Qê"SJJe "'UPyIy@T!i8HB)ͅNVIDXYkL4JJ=8F%%|D;uiȤBlA.]u"ոq̴*4VRjnb_u6[7'2O-P:h溍SR7ݒB3?M3  cP8ܯ6J]!L !NJ+0Y+^w\:{N?aˏJ$1I/pL#% Uϋ)zȟ[. Nn???2;ңC*-2#6(7m\]7zˮ^WÈsWy|UjrfS?oJR/#_uSzIa|Xյ|77ϋg~sX<}-%FLT_ZoW hʸߗ+q>7 ty}!wWO/Ʀ<s|t¬7K4/G=~"-Th舧6cyZ E:r17G=lJ+Rß{ZdKtv`_L< h gHh L,z(ձ7"NTa 6JF10,Zeg}VsQSgc%:oE|j %u9*$Av|SϗP\mz:2strDiM"qfB\m;V2s|;@`CmRUXRYbG+ B>W.{ƈzq!xoۿ1&xK"eR"& 䵛GpYkI oF3:ϻbi0߇u]ז!}ØXܯϲm 㳧j%zy;r`;{Pac#cҶ\ɬzg` m0RVG4E!QEh󳷞v؅C:ڕ"V]A m鰶RgJ|JwdAx 6'AG sC5T;! 
hW<†5\ L잏HiZ΁#&py02\csuzÔtEXΜh"kkv{E^k* o2AτҨ֑©]:q!@3ZB3aeӉT\I=NlO8-OhS=ůPh@vt/lcl&JMy(66jԂZ~v2:e7n )D6CLsoRfjA\4|1q"{ MSl^T*ǘ2F)1f.yfwOy2rԘ1ߪ6j< FWW0h>N;SҬuSUN;?tS-桕lz* qqF8Q:,4q߅ 3{a2Z0Cy 'xDJ*YVt.CNa!qu b)-PPn(zVGgMS.%4Bpi)4e&p?͹̝*\a^~MJk5=mBj$vd#͒>_!p3l4=m@5C ZE"uK{ɒڻ_i~ǎ+ц?,"Vj{b,KcR!6n(h:ͦ[ʠƠ)FOZT{?C][7(NֱlbWo5Ȯ}[T a[ u@i0y2\Mm(U1 Ч@}tz7$9xB 1H;%&V) -:\-^D  =mwwo3$tqukq XqS.45MFV닩RX~.6m%-G4gź|^xvr7eν샨jpxYKʕ6&#ތq0y?ή.grm2="\~v<ŻA(~a4{gJ}|{@i;oW +ExߟqNw^]<P ۦ}JAa^c$0DJ15@A۵}Jw^C|x&8j%o\ur:0o֔Xb &sD$ǼĂDQQ@41p0kz%sD/Χ`s&畓4y0kJe&n<:z&zCmEN eIciB|tE `HZgu IȂe%Q֥RU_C^toiinщOxhs`*7ĤƄIjtE/ЁplRԒjJO/sEG|MIyKx7U_`DNnCPw#{#F3JŁ!h}e5@1yF>5hR'dIhnfTL Zˁ0󖃂N+ԲXVt*'ncDu>UjRw0=絀9?Eh=kA[X~J̗wovܩOX)e&wx-5$һv"@zgc2k Gnځ\Ӫ+:O[#vtvW`QҀH||dt%]Y [jZ?[}vѣF|߃z~G*sQ[/W\[/>d(GQQO?MIv!zPyʪ>ئgTըmI͹2b[pMNfdL~)vqK@^\t!ΉN(PS}ξ~m iNg}ydWj A֤zAypգ91U\zW!á#PTrfݽ\s<ڢЭ]1V{sB eL# Pfڍ'3-}1^hw;rw׺xS޽' >9G%b]-7()MXHpZ`ij;R>SRJQ\yZsKbԢm%cگ\:Zn맰s슶aK^`s#Tw5^Yf2sJW1JjgB(IC2ו2IvJ txR&i T&Z!@ b ,蠠`D+?)q \C  gfItAE|Da( Q$HhPmb«" 4mlT,8-z(8CEi@E\$Fp@D'dNF;*FC,vkӣ N ]M\G-7`#z'sg`Y<##Qn~LRkdrzpPRr2VK]~?kK ) xn)B+IbE봦@oc "JoE?>rzy‡5z)Ǔʔ0={|v @3IWN.>}ܮ̾8`Y7˥T(UV mpsgZm1<]yćGX=N&COY!Onּ߿&F#a1SNRa.R3z榔8+rNёrmeSN5y7|9:Mio~wB^)InZHmQn"6Jr2F=uܚosaK`&SyG|k|DP'qo7,$w"[/|o_# !E_W)Kc褃T(ہnicNzJ3VAPP'(J;f⯫=%UqC+prdj@*qK]ebV)qP9/y(d .E6^Q T`[WZbÑ3[gi`%"*c+rGEcO@j/V2|[)`T__Ǣ]td7Fdմ E\aY&::`bosz# ܀޵JG0oIXOH3'X- ݗ17+Pkԯ-轕-+` {(ZpZZ@fj 7JJEǨ٪а줆$X\!okdEXמ-]XQEv6kVo@ #6蚢`oFP!!zej+gܩcOF;)8f"%nulk .&Zfĉs|>"*8ڼ[~0)plwog_..].}_-$W^ܶ׹ÆsZo$af}'=O@ #Y* 89 X2 Z:]Z+~Vh1K\]OFuYbTbKZ3*-w.@&RZ&Z,s#@f?j$EGz^JTDmc!YHU}^XRdEN| 5#4 Hq;5~"7-BHH),$U#B^$s((AZ R9vNZ%oSsS mVV3X'lݾ%ցK9Twv쨫I5LFBh{&ly#P)΅:3ui5'04TOBL'jN>uAfT5F|McD%D)ߏ,JY|2+LE4dG5 \XAoM%fW*U5L6 'gHay3 hy!`m JA+P+I0/ͬE¸Te3Y*4a>g;vsrJBI5vٟaNRء,1CT2hk[1yB`, Hy#\Iz,c`\~ܯ! YղWLVa) %dJ>w`h:xgFqfkw9O~Jp( NBOGҡ3{Cܒ!sR Uu= CJJA%6nqѸ_e g"_iT|@c/}YpI_y%iIE*@60Wl)R\kP q-s4 8Tb׵p{sT=zb{=z-`c= 'G/Oo߽u~R1?y3X'tR⨕ꃷԾ[}z_AS!jK*E;P$PzNoę>J!ս],gen{68&[q}AVƌx9VSCɐE4U0,xN2 *r\ʙ(*S'X$j"{0"hōi9{\UVs{AT\8dJ$xUq9q{A#F!OOq089:FZxH2`g.*RMOۼ2.ϳB9M2 m.֑R:u>){xW澽[ܹ_Rfm"O)$D}jɫUa47'+A|w*A '=mvc?rdl A N&*|\QQA&~G;^(AJV;,9Z!TjǪ𜧪VRS-S5G*!k:w`z?_eupU}jyeN$F$3^d6KBhGkڿ%Uw-GWDK R"e2)8i Vǝ)$9y"yu"rB[C"\N=|r!o)bNgcU[_߲.z+pb.*%c+\Đ\ء7GPJ-:,)HL(;tĄ|NvʄןV.fpNZ H TwĔUn)" O*b;ihQj- PM]ΣݗOg.Zic3L*N'[>ɶd[ z.H [$'`ѴۗT},MCD{8b&nÐ$Vw#R|8Fs~<98I@I:q"\Uo0ҘC2\ 5X: SO0u;zCሊ^[8^I0sgJG"8\^?=8>ʍ im =Y2eQO`H*&Pp)Rr2% wMZ +Š> M!NlvH 9FHg~"B4n7~Iѧ.dѧNjK'ńÁOŵ'@я? O N#@Q]:9a*ca"_#jUGgSoﭪbqpܪ,h2 LB(̲p.u 1FY)akʠLmAӂ!Lvתw}zAdJ']h'mT)(2k&p{5'T+#FNO\ EaezEwa.ѶSt8ňkb-JZ+ 2NZMPi\@@;9k qMJUnG|o[W}6q~OWi?TOLM-3gT Ν@HY8IuD!@&DXVpl4vڿ@ jm I(2, OÛ7 ۝NJ/+**ږL7[axH2Gݺ|I%ٮRr[0M?~uzե[ zFo9^,0ړz_Ak.xzyQ\Ն4sq0F1fHU%'3o։#cΜq,qMQMױT$,~1am.7 LI’#d\JRa(ޭqHoӻQx E*k̍&4آ >ו3A5fُe8n=< o͊Z1lݓgzt["Rt eu}\""#Fbkw}1 ,H Bќ(~ B4PI-[.CRM5@a_Rk[HH#:f%%$uHz)$i&cw Q bOD#w`1a{_ 9QpdXkTfݖ^ r,$s5n?Z F }7u1_OgNJf_| U~Uv~a㏙K[8Y䜗er29esL N䯳BRI 4N BUa9i$-r5kk0TPYJB^s{zU"Gmլng7w],WmΖ藋E޻R-h<Ih_Vzr}$=Q ڀ 1Sd IDV&!9{OtJ] i(S,3S;.3^7ͅDp?uHHeC9$'πR\tb8~&;6\ f*xiz{˶IDNdgL{ $nނ$Zp)>".nVD5%P!&$+1g rA1R` !/>! 
wBYxS2Έx@' AwQmK>Z= јw9zlIWr2YǍ5ߞ!#J̀rӕ*35,:0[*fjyeN$Fy=@͸KDcqp?ƴ%Ñ?z@M}(E4S["pw&Am @ZH$NL,C+X%|qJ}d|y|VB*;ڡ}Co0o5D,-uڇ!^ن9zW=$ۥBbq {1|B'6M1"?@2xy»!A:\<_s|G* |\o_5m 5V3SP$FH[+.!֪Zrsv?I !r7@)NMVoqL fk-$BcejD~ ?Ê{^{^gCBX@nd9FSP圸˄6nt2F ) 0m)68\߆zexw`x!L{^i0y=ô|;#F9AeS̍t0iv\%`P/3zV݃\ ؇^n 9!Jjmz nXZg:sk)( Ci&g tgjC+{CAJ'C{E `mi"36 h#CpEf@88Ea`M.sVX>qߚtL|e/ʌ]ovyggP xd-({-z+~\G?_]R]޼>ANgfrW~,n&>n{}> Wwv߹ހՕ{`EVL@dg 9P%>8VTi+w$Pz sJUa4P͘q7<&mI u&+@hc3ڠUw"4GX%v )~(nj\Y#PS0Gp@U*UWb\r^UVU , NrGINUNJb \U@顧X@QzWj%LɧEaeIwn8\B %IKk?SY4jF4 tvi}`VCE/Le( Q>h[YauR݄qE_cBRj0CGn/nDY2CƁtuBP~SbO{z!Z-_g'z JF;Iof9 UW6J>!JCSa2@&Cn7"!4)A&ppQ2U+˯-6_ 9$oc^#5cQ \|.s{1tDPfrgzA2C !PմQGvb iIJoぬ5 b4 i:I"'wxq[!9B@GXI&Jr2K"VbNP( 6Tь!#'i&C~wm=nz9س+ylo8Yİ㼜$mag4I[4m[ĀϴZůu!rkeFw#DQDAu2u >w~L$jdBz(9$Dud #ڰ2&ɮ@#KsTfmN5L:a`MSM_ !G_B-RGAY: xwS *D ;ѹ(f)д3/kzP7SjǓk#cHb(*e~y|۔1C¸4d-cBL\ƳV_p+GwolYzߣhG~wY /X,: B3*=,z6UB`)[$%Y_|Oq+-GƓލ~/eQQ  B 7XK@`P)p, y9Ŕ l`Dԗ.ꭸrE|;k%d e惭'N_~!aAO sA]=(~^z X_Ǡo_".*yq|ux@Fgk/#m[*dZ(e_,/,bhc {5YCXRȁH`h`,!HI_s2Z7ggmd^=¸X xAldJj  `Y\ZŤBM BkXIO|Ąyc BD$}: uG#7 fCv r,ª}t滹kwk/|s~P6 $"9)U[#$rs\LLS,}buQgxGb Ir(ôų{uX1Ɇ0Kv`p/r 3pM |kÔAZqAbWJ$ ;Kr]J.־ÊyVDSD^l0#ma81g' c5;%h=~ۺIAF4KDC>g;H޿d0FYK&IaeS,mI0!5`@y"|WAZpo|u PD)i>NU=Q̍ C2Q'?d"*ÈR8 BP̽+4- nъ,0AԳ3nCvb]lV (D Lj2kӐK"h %k?K.MLEw?C tŢÈRH}!L 3@[SF€aDӊʫ[!F${)UI ӉBD;YL-KMWC/@Hdf5į??rl;:4C(v/L1_R/~`O_K6qsD9 {LICx-_jjrv>u(|tOS}DVr8YGw_ayaoqߛoh L@sȘ߮ud(KKğV@Za==w]O*ZsG:*E=*ꅼ9@3DZq%wJV+YA76Ÿ,oMDF:Mk>!rN;:.*.:yΕ78Tl5A2;P< i 3;|(c“",LK[0nPQ Or+0-~¤B]e)LRa~Ýf"Rx3srT|3|uf'h~S$sttv?ƪ=zAd2GO',\}st1'& d/QIۻ9HiIՇ+@t{2uh =Uճ2"85QH]7 Jl/?IU|# .4*''4) CistlrrFVS)GNI  Oӭ[d0+$׵{qcQOaDEم{C5вH|p-1?\އYgn)T{JHG6!&٫Zϧ RIJCC.H\ ϤPC*(JX? mym'U٫XC%щt$h ՓWVOL ?p>\/4$ʡK_&4b$k}s/rLSMZ#S q"dDkv5exFΠgϷ|#([ݏJ&郿?uk_VlLY[atl`M{] >8\1`~޼уƟ'7|tdAɮ>6WeY3e1uau YrYtauUдcuaW)]g48eԙˢro:XTw;*NdT$Kv91Ig3,y٫7Q.^U ,EcnE;*GN3d)Utõ/VT{ob H9s?zp ۛޭ5WՋmP6 Z=XqcnCP=G/m!sA3Sh8vz}~yQbr4EE8|w'keRSU%e|!Pؽk_}=^L*> c0R*ё>L: T6Ċs /%1bHCtprSx\OwK8_V*D2T3/ 82 ^ Tċ3SSh2DXf?c:Sp9SYhBiRfmmK\ 5FJmB}2^(AcŌgThagq&x $@]Q CVo6F :A4M$^eW\RXjqG#4/|: { ݛ~9 NR%RΆrYSPV!.AC oU 8Sd }[ּEy?/)|fl"A+ r`(a QRzUk%"}o̮_ ]`\ K3 S\z45/uEQ[Ha}e>s%I &(kb(˭֚aEt@HiM*8]Rv9< >@`{A  $P AY@jWJ%|Iݠd`"d ? 
=cjhKuh+*U#}$;ڪE,PB[l"@}Cԁ!o@3!߸;GBΠ_{x0Rjz2NP$"j| /΂ָ0x,*u}ɆVkj^vl5kIkֳq)zJΖd&80v =%莆ܝE3HqgRtlUn͔T݄t FYK).`iNҋ' fPv4 p5-oSKJCrX+1V)F;ߛx13(u0v!խw^]03T'򼪶T7M fj7Y/:Ma܊'q(IAd WO X,YAeE2| EWK&| 5D_bQ.Zҧ+?,}Ϗ!2d\Lf& ORʸSAت.i)|JXPG0W)&70 ο" k0j5&DUO{YJh!D cHiN|D2u.Ce޺lo5e!5PiWj-ښnuh*Ֆ4\AIM Zt@I2:2- ɂ'2tf!TqOᆫڟ"11E2_sj,ȪAX(I΅eqei*qYln;SN?&pJ ¤3 DU5Z^TZ)`TIk"JGLܮq,_N 8660Pd8Nq[.,K`:a1wkQJLTQ,*8ӕ]R~[ePrP'\&1h#0EŔWfvtPW19ZCT 85r'D9IYv OdI>m9j4l%j-CD+.ykÅ猯IIJ8U'e$lZTh;) UI uJJTfX^,˴R]Az]wh&s4I4!'ǭURqG:q:p̌9Iaf ;JYҕզ4hsDg4V)_kɉh1]n13SFɣ⢸%uMc5hs/=ʦ4ҢR\Qe⦫ Yъu%aR"LQgRA'.W9*tڠ%T[crtx\԰<„ԧsN.;,LNqHpsZ6ж64t#\k U\jLpAqjMa24S^.Y?!9ׂx$>xX<}CITe?ޜ@bksJN``ec/hNYo=@aȸKD&e;=v%zƨNl'Pi g]5Nwnw Ȑ~9-aq[%t?$`:{p{yO&Ah ;(ӥ5q+5J2Gi zx87djc2opvr{Wq8޷?OȒ dцh,9dtʒXersUMS3WBR=Fttn2@TQ{DxDOn&D5 ѠgmIDvz %s %%ΘcN56>{|wfb&e ?(Η7{K_;6g29޸?{Kc@ftVl";٥MΜh1ØC:+BbfB(gV75]=9aѢ-eV(VeeZ}TVkl_w3ъ@o"+)ԾQuď+w߾:~ 5#T',FhǨud#ۆ*25ZU%*7-I[C0qCh0-%Akq1G;EݱDaz C=jk㲘W =1/t-h1;>1˧{ڻ#QC !!zMd (ㇻChDOpw?JS!xPG3BiƩ:ڥ%6zN%sD%sh$s?Gvn>[%'9n/Γg099" & Fq@ :m4ɺZ.p\Nh\y`˪ٝ%RtB;jԢWˢW8.KZ6"!T$m=$hĥ؁Qu10f5.BF9乍˼E6Jn%(ce h!pws!7i$}x.#z#ȼp?T|eŗ]J"1,^ޕ.%fxo4?XOBv2(E˦HE3^#{]|y_T{%l <\Mh_XFJ|~aH!qSڠPZU¢f9g5O>Լ 5^afƻ󩎏j4FnKꝠߡi ]Aq^,YHG͵H(髳mv409rt?} ^ѬZffynovY 4::Q9ьs3S :2#$O }Įǥ`L;JO`AAhѼ+&?ŤWpp}}uE(X,Lyr6nE7wV->E[P4ܼ h\L>s>qKc zDø=jA'=͈۲@&Z2ȭhdb"%SZhw ~ b!!w׍O/Fmo&V诹>Yy;=FNkNqq^7>kk,^3V@$#S-VJkDX{ܨd1e͊Ƅ62}#'}oЉ,ڕniumz:e2DkSBH-gMR:7h84D=A:zrB3WZdF&.JdՖg\+Ј<.cu\4cj稦!nIьIEGՎff#׊XVs@#`'@1ꊒq(![kǹH\fgWLJCڊ@V$-Ɗ0\>s2{8=pݝ&%(d6tl /H!#azJ:k'Ѱ^ ;xAFb]H;Q0Aޟ4v'J2-u: @ބo8,0h {:r׈|O (oA.]yl VYVQ=Pڂbr #r"d%8*TKM~y+ɛEqv :I$=&V_|1d7t5?ߐʦߐFnt5NΉHMNdS4nrNoʐVW K&++U083t>o K \L__1lv88RV!TY.dwf [Ѓ ݙ-3[3[2]6P2. 6F[ep]qKbRq"6ĆEZ Mn2 R8IgcSB$״[OZȈG+ui̴"2tqD&>v#iiגXM!v-YU)XkZ5 ]y e!M{Gj&gOUӌyAO3D,(o<}V1@}mSč&Rvu36ti4uf\rљ1,5D*ې!Pn?Ұ7L Vt[G{,m<Q ;0׳)M3/E/^-S3R/K W0K[-!Bc^F i3v{`hIMtRNѲʟKJ' ]@]M9\$5G <8ʛ=wjK^Բ M.l 0%ظ-rݫoLO L:SRy\f 9J Ɲ*"q Ԁ#rX8H~+/hQZ ABC ^]yI {XdUPļx?mA@{wg7}ϗ7AA}367c EA>g݊AX!dd3iBQ4a̜7{۶_-Nce.nme[v%99ARDɒEQܠ$;cggg}F"8q${TLoC)3MkF6S8V/%Q@Z wjM ZoAb#A?L0pAd߶_TX)@꾱ǐFv~ W ;y!ʔ:yoZ[BH}!ndJC_s\K|Q) k9E͑hZFRY<8QRJɍX S1Oj OҾ7邀)![+Sj4\CSU*bE,"Icf('I˴+s|Z(sSŖ1Buzy*Q{jT"N XSHiMx xJc#7Z]\K!kϮa,zIr2yrxKmAեHT*m 띇1΃}QU(jP3[8nFITޤ2f4 ~N?Fôk󃗜I.ۗB''_goߜ]ovYXaկEg\OLy"@Hƴ [I$+=ĤYKf08n./kǑW\"V0 F8B2j{ sJ IB TME@2 x=h2gT-$Ѣ $_L 28"N 7z4IՁ7A8iNʈH⚉ZD%}LLjbWo>?+\0oz~x!LƄ˧? D,ʆtΉwH~J_DEk;~7 'vO^D[F Έ\~; Bc%ljM8nSf2t㺫<דQ-qcyzd; q ,s2u5NBs4d7%>O+CugsVMUVL][yJjC8D٣9999d%;G4 u**% ɬ7:ƅVz )aHnH)Ssz95/??ޮКtG<4 bs2t@j~"j9ǡr߾,yA YKAuʹ"IfHyB4لsu$pKg8}}KR <׶ @8~fX Z\tBK列 uxaP7(+D5%2&E.EK*uГ({̲aV*S V4NIg0\{K2.@TdR,V YfƏ+*I$ae}/a;Ԋo9X>\+u}amލūWͳz?%'u T/lv@9e Y*3(Dv0Q*TqFQp+#(8 mێ!^@@A'JM2j<9ZQxfb5(=27Y Uga4*zǠ:OkޛJ{޿{XDCہ= M<:SΫGgJ]-:TXfFfTov ff#9?L;ߎ.&ơy;;o^kiVG?k@yfh( o[фv񸿳\zqvuu3o< X~_ëWV|\px {U墐֋'ZHY}5sb[!RsxE8|nSy!etLofKHߨ;ƌ@魟o 'ބOae4if%Ҽ sj3^ ϾEfofO(O ih)ZkNvѭAZuQ9餌)QÞ˯=Vs"t^B&rr1_?z 9B_j.TݍZ< +]Kq=X}wWqP q-)rsY9s" فMC_ޣg'эM$T!:MxHliV-.Nǥ='K2lےm9 SupZUW=Bnᬻr A4-7/D*D6v~ >BsMw?RȝJǞ%TcRl-+?cks(M4;Q|XHZH;EPNF._Uן\]\ċ߾t%ONzY@{䑗bXi 12?Q`{?ns2IAC2[D6 IT!FyiZN>h5'sU #P&n;̹1R@M7Ԓ ^9MQuTt$YjZF4ʆΒDM'?,(j D㢬Sf18cMVk.ob$,+2n DÄ0ѡZR;ؚl)w.}Ir{ݧ/z/q,^򔔆mAn8_L=9 O/vdUvK":J8G؃ͦ35F 3Һ$<}t#4ˎBv_Qs슔SQgٓ` Yί Iwcw ~aڙTtg/^}W|Wa#TW<<4Jx=ePArD)u .J.T!T?8VP;u Br>\_? %r_?. 
.>octn1<>e_B(HSI\23 pe2}" yZJè@ѼX;?ӏ~=~|teb)I<7~֦Pf6KLH bͩJS9Ʌ;jm2OL2fUN?Ԧ8&5d@l:Kħb9Ue_ֱ&!eN%O^uPӒo KO :*Lad\&I~썟eBj\10<هs$Gg`,&.$V=H I9y{#2OȜmOjʓW|˿ gbfPǘ!g 缕c G֙ ~ IpqOَqKhեGV\ǸE3:rk#Q@fuNd]:ƵLg6B)ǤV{z ^]{󵒖ΛoOco!e׏7_ sV*#[:+Ä8cn{{ x\.8jFDuwY:;i0%;(Tp ;͔J5jMY6 WiEE4\@Pcv`!0JF>0aT@6ưE{KiQ+ЌY&HT ;RsŅwE{T좼;so;R:!8_Z[c[=mE_[{^:lNzV[?՜7 xЕ(#OT 93.NTy@p.^tqic@5mzkp#XHFAQF;R)xPEW~zrJQ耭4m'؁\{h@< L*/Bi`$XH,w.LO zsWʯ̔˾cj$apB%~SPt3*`!l"I 2:)W"(Ղ8jJ03)_`S۵)6lLU:ESVok[i:>gG ǙdL6M۪l{na[{DVZA-ȼ z)ɂ詍v^T{70y,r- erXbSeVxFLq~6N$@| /Xa A!Pnfcgbv|_jj7K&LYltPs9ñ5y;8VGVט'HMhlQW<)1㝂!6IΤz5&/#tsΓ5/^̭U.}?_٧|_,1.MoPI2x"ꙠzrC;=b/>yKجM>UyByM4䙫hN5q-[)9SZGp(SB`[Rօ0q?p5LR/6$/hw+Ovp\Vrz+j. ^) LDḚ)pP9kon҆8,b N9wC!DE&γQ@#bE#Q}C>.zA3)w ̫䁎ݻGUfo)F;RȿY琷AKJ_.b*MG?ե;pH,J0=x<%M %bXtkaZ1ZJ:-J$k;K^=*.(|Y㗊 ZFԆX XzR[{2uQ՘+;uصDa:*Īvp`v`HMCxp.]^PwoY,V}sȭX,""M:vz dnvbbwjF `6|M?Fybl{;pFJ7Y9^ o5*574'?/OCy&-eaPǨ P{g#8Wb~'C?Lm,wn;)\݀ 2Eeb~$7,EwWwy5(dng:*Zsvg):eZ],c_,HS"#8bP4L&o?O{ L2s yJ쥾y1|:<\SۡH@Vbg}<9OF[ QA^BIqC\po( 9\d΍7\4x|΂߄(}ŏinD)crdrzh;6kRۨKQ|› ov`$<%Ȅ(D ]t+N;|]cyCy!j xW2S@՜-x?,ͨgiF=ϨkApV <h=rD{Pt[ƘN䰗ׂ] VrB:jq0*rGŮyf8p {K0 A!*5JN[ 0\0vXIvDX1]bUHadW0僠{m gFD%<*a^^af*1Ǥt* V3~eݜ%+l~X.t(}XPK,e鉻[l`UUp_Tm`ɡj{'oo}OҘ˛X&ZSDߡޞv6_])7(>PLzSԟгKgxwu˧j[*%ej;SL",EdbQGCG]͏S"Q׈TMyq(_fٰ*x]^~XMW.$SR_H hNܡzyO'eP !AwG*iYÉu)dy$ ܈`i ,xU*r˺&\j o3>qj{!1HݕTK(gdwNsM)zY#X=Nrl0In+(x'AJ(ƷX 8fkȣ]dy_GPaY`+E'9:YH 3Bl)t͵}^~jɧ{YN- y%osX>hryG3[LK"vaq9ߞb4"X 'g Td|$ej׷1]#.}^Bd\rV&r3yP/WBcőjꭜoٝx?}6Pu^kR+ú~6=E ?3%IKQ7ڃW:EkBQy65^L!ez)FΙGe|7{Ѭ{0;6fG{MZW5rukq%*F!YYRikvq!J4@ gLz0 _t[- x=f4Ub&3;P9` (Fo=`T*UvSJZvY QD!Eg:הxbJ9lQhk; zBiAwdX qBD=m 7ܛ O3hc\~H_1LH[EA[Fc([Gjũ8MOa'z U}ׅBk֛ޅtNv8G#+q\tg%zB)y|Uym;K@s\&0l'B%$3hfC ,+-9E!jzt`*ƀ}b'6h;*ouS>Ɣ/c+J)?y :.ZÂ3(rvij:htԈԜWfLcx[L,*9C^wHs#mU#F!~2d9C`Jhp*L-kHdK $D! I[-\B#S M(kK]k2xn,&ffg3m 񄋌. -@,$cx2 Ρ[梻 lETt#pOQ9(E љfݑP;5{]S[Z8Rw-@S=4=2 9RBR"61q6veL8+XײT;xP<&$E(v_$Ւ`iQ x)Rmё-KAh:eȒ]|4W7wwORˆU{8ENTAOq"9n@mEdvq0,<͍D8qNh]rOhʆ,q+\`Aaÿ?^ i+em\ uaz/ܡN^LzZMe#-7ؓhĘm"11@q%5UI+a殆&Xʹi|>L"$ Ű=i$eyy+aEuz֨ꚱlGP0K{9cփKaYOTFn+*مD{1g!g='7Ra>!|eԀ.n.b03SJ?w\D>#ڙ]S5}@h#jș/ [_e˛:>ӥj=f˩凫~\^W]G-[}C=R";xzS}N= Q>s-~s77nMqCҨ;n1:m*@^^ChV<[w dz-%Qn,Is3f1ɗ\6b}k/[$deeD_f>/o̬6#J8 >bg{3eOߠ/@O?^ZOU.3nA6zȻiB116xO@t¹j=[M4Ȧ:̻'g.5HtVīS[wawLmuQqi.uS[  La{l#e\*) `g>"* +.~‰zh̝^YMe#琽H_C} EvLw*L⏟:Zpw"9W-#J;\J; 7{JGSյ%H-r@:E`"3^hF(+$If,r,L&H0֏@R7Z^|| P,1 Sˢzz/^Cx9)%TI;Of+NM+-]3[o'bԸƅ 5.lq7d qIV`J(Q\ spbTVp](b✊TL%<`&M c F&,g(U&GDc0JnDF!V* ylQS&\SL7Z,VW[ )uI`i%[_Złd)Q ƂE K{So{FVXz Xj pS9,'m;*c/VV`/zj㮗ZO18 kS|d>o#y%iÝ9x>oQ0qF6faXɠF*Vau~bu>?ٿϵ%SR)Zz~OLLݳb&KL* ٻpJ-fƪ!ޢjMU̥g!r r[ AA=iWZlGJlL]U,OzӽOwʉkIel1/+!,v\g g4| ̷q_Nư@޷%tz|MeDn7APS %^ Pޠ6Pu^~ W|Ygf5̉E#Z`Gn b6s9;KyX\/5S1I@LMju.!\BXIF)@N **}HUR&s~%|ֈ4D _Ϯٗ!=L9|ۙ(d_j'IhVU̞ El_V2*=5 F U@t2Lq 惂8ә>>)ZLmf@}]+쥚6^QD4/=~Uhܝ*nfTp:NVˏ^5~T3kS5W '^'Dvĵ:kb*%]{kT,Su-=Hҁ^wmH_a 7'ۛں\YTHJvrC<8XqŦ44~n8qE.WP鰱y/za0%#*[qxvM?4`0Oy/Nb_n2$1ҜHs`m}9@ИuNnV(hA%)r^ *)"DJdE(,YuN{,Rǃ"a|δA"Mj@ e}ȩ$ȶSޏ6W:QrVmgρj+"yb"0.ƈQx ͑BZSJd"zr&YgtN-"5"čAsa09 G m B0mT1`hF5B2+#Tq3H3 FPŅfBjeQ.K.pR/ >>_AftA1ɛu3SX-Eu%-Bw[&eh#oE 6 +3ZDuKkLQEl%ְmW[S#ҊzۦKrz~u;HB 0p]P@K<Jh=aoƋ _ao“dzcBn=M^~~s&%Pi4>sv4ͲP0z8l[<x~L[5Py7KE)q^biL!xGr\RFemZۜ:?}"mb"X'[G ,"ګ QA(ώE4T8>lQ$zn>:ZO6ʪ(],ţK߭R.[|F_w~T-qmBm"6V 1~)?nobִ]]_ņ>vdck-:@ta5ַ)bxQwyUN0khTlúTmÂr<0Zg^`D֚k]SSy586}31zT&mT?g뺋UAqmOeh/,}$5DwV'ؓ!B /W"F):=&ߑ僃]^o^]?\/YIK˱ "B )ᖩ"gJ_2\[+EW^AXK!y!g *Ժs[AlW)h9XŘ`62`s6u, +Sd xplf@F:Iq?z $c~޾)_.mzg2l =jGXMٝ\vo*F>o?PBRuVS`~1:<5ðKZ?˴4Ok6pj+pT7i:]R1]0Ldmq?J/H^$6*GgLL2c3'` DȞ1o;g'Uxċl|E Qw| ўUp0R ZF塃}pP6M i3[*TѵvNϸc;$E @ 9_"/t ~P+E ^j$rk'rly&t%5\xmסZg_.k*L@D׏drD G09Xp R&W9jR^iEY^a@jˑZ˭ˮjP-E)v- J Hއ-e)D5bRUjNY Bb 
&$䕋h-bcK nu1HcTn't:n'ݚW.d**!EoLH;x)q {Xb{[\C>7^o~OODNFhb5CL%,1]W~?lKۨy^}z"3sR`}B?=LÆy0?!_nٛk!zCjA*=a.ojބiW=*B`ٟ-`{ 6O\<,A,eTDfLm'BǓ,2́$gK6VNO:wE} fv:jo6"V(&1Y[~b<(d>F7WԠK<5`< [6NWޙYzԾLry֓V*ΨqVK763S.&RZRYKYK#1zMƇXq3ͳx(Ɗ[.,az$B+2F|ixF ZJl=G)/Na@=2wog,|n+ c(; HZ JCQ*8=6I+! `}̷{A$|$dvUR|Хz65G1͌0|I ddX #s wلiOZR|~+t!KbK5σ t}=.+`7X z=N$Mx{?(W-*Yv z]w#P}.tҔE.甴 w -%پ\Fۨl_|'/=Z%!}_Um=izȴT]\XsC!AB9Er{ZUse9Cym }HWMw3FZ*|j >?-wǔPtSXR*rWF 2.Wc8xړbHv*&`qzq:yD 8D "\$niOѕ)m F:҄k lHX ';V9ms^Y"rR!r D-EqrZP5" 9YK*<J S6Dy09(ERkΕ#rmt g ZA9yX)B5ZIϧ#;7zU`A,9V"?ϧ:lYxzp=,D bǟ0~|xSb0VlEWW8+ zS)VYvsIdqx>R={\q*$૞ۛ_㮗u[J6JmjlޏǓ,9UiJ(.]T(9AJ U< k 2,lu~ v+)ۭzsucZfESA=. Nz9u#};E[qXAL6l4q2+dIC@Y ^%֬9a[+3ʘ`(ӆߐ!Io[7ΚyTAZ޴e)~hJE (vqfS k7$rom pʭaȷZb0Ec$[I _{b~A 72@l'$sGAyj HA;*04hn[T\ޏʒS#h\ h~v%q@3/ȧ (m]ϋ1ҍo}28amȑ (sIy.C"|DM\ X'Eomr+!1;ߒ#4Wd 6VzZx T lt\/\-d4A+ϊ>:%9ori%އ-XYr"*fTҰa>,x$jIĪI.eÖ͑ƙ ca+ZX*Z0))zm,ɩ )p cTIhЋ8\YNj0 $bT-K\vXFd[vEaD樂D`dgzȑ_22dS]` ݗ=0XKHg2(vKnIG-vb i#"C%RIzhJA1zw{!s$(.͖f4ֹ1%6mǖK(yVR{D@3r]ӸcBڮTJ!Z :)(K٪RY=ß>(TF0TDMPJ2lxs@ȁ!u ܂= (%r o8=(⅜}l5}bvΗ`4_akm])em(;2ŸNe< 7|[u]mpKGMҤAC6zvR2|A!*Yʇσ`KytdVєΨ@v%>D7.Dp&^wwn0*nntN0k\vF u߭FW3O7Rv~ST?]~Y^ '7w GS2H甇dZӑ^!63^ -%qerO9yNpļge6}Q_+(t_3Q$gi˂]~+3ƥ`쉰=O4kgyn|"w{@Aj޼;dCkΆR(IXk^nU '$V0Zqv-~ yfWVXazȕ6#2  @K y/C%5h;Y=yGL&8\G{,j1gTQCT ]., QvP d zyd@ad6KyfM @.)T3%K,ɈGOߔ؎8T}<6ր0_|qm ]qB ̥`ȩTtH!%3JAzɬֱk\a^ s#rIUd2 13=<4E.v:˧;'Z AN]^,eU5H@+_ެ(Vw{Wst+Ci fŽS3aɒˆ"ۼ%M3,QGGQ Vz<4;.^]N1AqDq&k՛_;^?*]~7[OZBQYe*]Ct#j3kۊ6'*nGhoqԇuM׳wcouKj-$\D,O,@3Bɳd?ب3TNc4ꓱ~q{[aTSĻL0x,3iW)~iLI,-yet]_Wi4/oEP&lpmg 58ÔS63ШAdm~U"LWlPƀ o~2b3Q6 @ރ"I-"!ћx8@5V^u),Cg=iBBԶѭ,旯r!JPF5oxoj/,\iowDOJbq#1ƚ_J ](N"?.U+/`1(s&]ͷ3ϘK?c.,ϘU o(*%\f2K3Zs -' r--͸d[N)tB yAb~:YDk u_t1pGMn5m8/9_Kͯ7G`PdCN arȅJp`IkL3tON@ [P":/% !Gs21 Y&@[fp.qJ)8PBZegL!Ŧsph!;@ Jܪ~Pq¸vX.=,^e&?B.ߩtJ02A3h>fF92R)C%JehbL`LW[l qzD]z\'1^aR'2r6ai45 .^՞舳i֣lAh뚷˫>ˈluk3ڵaVvmjז_bcWp~~έg&.[xs+dhP{vζq@*m؏TS@w&}zhx7{8ZӝsX-.FQiQP ϡZS=g6G=o  /q77Aczϗ|y5M~=]6.Ek^Mќk]F(sUʪ|a#T(Š̲"ܢ)J-x)c24/ҠkaC^PCzIђz,/萃qcFO;@.c&B}Ѳ?Er_l jyѺu_N#'Sr; WV4I?CY(s߁_Zi|ə>p@P1@u}.R(hl76Ot :WiD8i-a ]݉.E^p:T6FLM k]33!𛌲 F0qE\N$2ɜ3Ii`6ڬXw5sxӽz۫?&S&Ku NPD/LEl터j𮝫][V}12 4?hZokh+iE&4U!:n 䤼Ngvop7x]2+e;->R"j44iv-ޘ+bMN+m#EFfW> 90ؐ=M@S"$ec8=IyIl"G1,wYկCւ5"42w0*ظ2wƭK|%6ڑTThTA-||o(lI1A n(-xpL&H/Qˊo<ܴ_IIO4SX gqbs6F<X+,0PfƉSFò&%U!`l9kzVj޷Z-˗1.kZboJń{$p!#5N24TKr:_8sɒ&%B.rL.܆OS]-@d)e)bP;r@.W'\c^nh+̴S8Eș@ ]hZ aT)?fE AUDlhEhFjY€%6q@j&M0`MؚDM4JJ%+ecv}y V !^8l^ranUcRkanU 3*ƠjIHTqG"\J!={P,apdD .%(`籮tU{Jp}qZ vJSMB8B G m &aaNztޔ6'0TDIгä gl(J`j$%R3 ޅ7M0³BW7܂0N*PNamꯓꀭ%}&tx/w[}DZv\QBH)uF)uF)5xYg뇒Of(*  ToCc25 6>.NX7#YϯŹ9.9鑖Uz&UEmuZ1_Pl0<( DIDÍu'WTÌͼ|]{39lWi#4u)AJc57>fGu6yB-.괆p.B]6^~aXH}=>Jm뜂*De{m=l8*"0N9BnE acLsֶ[1j*58,1뵈% XfKX.2!8`.b6B2hcʜ\aac=^B-6~KeWX3wr&5kԝ%d=oT`{J1 16bal&Fxw2a),=ag|A1nJ5Ejl(tΰܵ*RO1!vݓ1IЛ*=R m;?_NO꧵כjQ 1JwtIn.<{3^rܟOsi.w\ NȺ>oUcv-Gt@ɺ+Y&I}Z);mfxАURaP++@S8"ARe~J&Ŀ u+t;W E74SV6mf>|[9qchHlGx`{kp^/GscZS,41T v`h8 |1 ,G),PPDD:N\O|9p0l݄0ñLq>;DRA')|\8 xlArvũ{V}KK؞YD~>t쬎^vo8:_ ߿uA_'GoRɓ҅ߎ;*/uWFëw/_tqG>9]gǦRX0w {0A'Vy28IN__nZû^\箿lM||=5='9 (|ZMg 7hڃpK')[ w{*iڣ_6݂A&#.loSŸ`ݥ׭οV'{/0 }>אL orrl鉞0KC O^\j[ְ<{Qg" {_~Wy> >V7D ml+D?w;p2L=he0 ϓw0g7߽e_3#[㻟E*`{Ӄq5;tkߎ #i>0_G@?2A&}0.R΂4v$55=<+WA;tx=Sl2MdAލOe6~uvs Cq 靻νus6}>3xcmP#Qҁä80`Q'b>fVbb >f Y(_&aZ(Z^~8\4pi-Iyk#U(%Ec.HiwX"XE#QGҺVX޲V^Ҁ%u/>q{ccA/ =rɊ!HcwޠR °Z (BK/%v6I| BdKw&B#Vp\pSY)&(Oẑd’ަeo'#E=@.%O؎۝[<`Ed3#i}[KvtzûU2&?DIiS!>['`?,Hv: +~gͫd[ٯɋ(}ǚ;c,,8rPXBh0^zYxap8s +uhO1|_ է$lϺE[diF[AJuc+wqF,d4+e]0D',#XAA )ROp^yXHj Kok j-6Sأ=e hSD?kLlBn^ /3㗙 z8ǴcݲS =-fsOyq> ‘  G/fp,O!ߠXM=],I"a2rt%[Y4ATNbNE-:֖9 騻:J!``8 `.'^UB oy J@@1 `4,(D<EXY*`^XW6ثlxTaS`.{2s)+&w#%doR9^Q"ig(8zi6"3YW Vو# 
(%VZAQb3@bF N7!\Z =Ci -v٬F3 hʑ,*pFpv(sdթ֔B{%UVD#MƄA>"Qj}z@gbRPTYuIuh qTKT8/_ВiZVmoNh6$sĒY ݯ_W;!T)\#*쁌jd2׹0wϱ1S ;P8Pb".-apKSCS,fA`/tlzJQ&"`b )3ѡ0BQL=E:|joBAH'ppbL2 NybC|`:!3/yl&Tt\ԆaF5PļMa= aВT6!#?Է^KT .=qV,Du pa&ed7enP8\n\+j` +ո)7$jz"e=viw[z)x" mqq$DPG>nlsUǵpq̨pH~e=_S J%6k#-ѫpc2q<1m䘆6!xXUk] 3Ug=Fۯo׉6w6 aƀ#tb q_%vw*qJXՒ,`d)#Y؈X8VID4Y+5)Bk;VtTWUn;.dI S*I!\R˂"翼cI1Rr.27syHfe;n{~:$c4 K*z2v}f^N=#J 0V<.ub.&itzea3T@r@Qio{,(RfTaP5¯HSYsD]/+a \_58RyW\ߙQ;<pk=%&hj)ja|mOa .CtٜKiӢ/Q9pivt=ZIW$_A{`7?MlZH" iV# N~/XJ 4uĨiOP,)<ӵma98OnGAJÆvP86w'(Y Pc۶6gr> ٟ&tJS`LU ",2"ʥzsa #ȡ%`c 7F5(ĺtZ)<0he4_CH9wh7+֡P=OUV{MR%#f`4oC/v)lB)n'UP,\X0laxU*Lv+B9/ؾ&85"aX,Y)a剜$g“?tJPʻPP`u9% -w'tZX]+uh05ם8|-0y٫1tOE)vueJ=$m6W?4q`B"-A`|`v{G9o:HpWo@ظy3Qw "Z@f&wl:j΂beQt90 !?RY˲&5^俷YTI3{n%"ڵܦWC&.Uz#\"ҡײzJL_(C 3HN;j݋vSJkF6?VK X,CCFi?Nӯ4?WߘSKDWv$/,&nWPci@C4,2B%%q(6^z OJUP0)ȌK(4D+ GȂF ߀׷m&p3\'o/ϪCv~toW1} Q2%$͟JTOa$iȇ_TC~ܦQ \T{ ޏ?7Llz{CM;7f2Vp.DJc ۷HҾ䒀r~>q͠ E4!An[/qB29!};H0k7y:o8%Iis(`:R,bjar5.ƈMAj5 'HxiΊSXW!X$!?-&H4g< nĘ 76o-~ާ_w}mo6 Okm=D7Q[|6o9@@'Wyn|,gM Jaއ>5K39{o a.'K ' .pLO'W|ϱj? ~?dr_&~."_ϗ1ub?a?~[XCenJP_%WU#`XX1RHUBDg$G!opJ%V{τh!{M 4M];coBrS҆Vop7o[ 2,"B VIl9aW8ZMjQK"ÉqN24rijTpNh@<=v].\w4^Z[9ɲ,z؈l>%ŕu;2jEfԤ-"M0woݏzwrRP\_f`/篾L'+K'k(XG$ t$6D<x4G/Aylb $C˃筙5X)$ziЬ4R8;OVR>QRᆓulWr3%8o٧%z22JW} LzIkHgyLidͿEwTADMբ#Ӻ"Zn*qN4Ֆă+ !=""*P0眈80CM1ah߫J V+"SJ#~K5!1ʪ=f}ʔ e"D_`ڬI[`Y"$4☭.ID-B,BeIsRcHt2)WȠ B7OphŞD-5Ll0VV[ityAM`bx(]ct2[rQ*#_.JָkSN.H>-k8u c?a=~&q('c"`G ];t9U~R{gY|=bIpG/" &*%pAj0^o,Xm]n)y$s̗ >h|g. IfPAXMӽ+$+}1BE坈41dRqcF4v=)M Gx#h ϧ_޲4@KKe'l6?,S-H%l{OLH?MnL9 =;)&|nek̈UbjnSw'M4M,hKe&JAQ6d,'L@}sd2b#3 LJi34*G D) '\򧜭KN638gF<`Bi+)pj!"a{ s(?JдPī1 M-bdtX!U}*9oXuy҄Ln0IT}!(K#Q )]ALJцw{?hNx`~%I*W[qq qPqxs 3A>"DKF=a5D;HņiO0f^%K\k ge\xYxPAb$YWJHbɚOD7ed{ִ?Kn&tɆ ~ ~3 b(lA bn_= )#_E/o֪&{QXmblon}l~Ok~%Qpt{.4i1v5}$L%ÍǴ ]M6j mO{C9(`!WTS [>>7{j:[lKu)Owz11J^F@2d16r\,. -94nܫiԹ85݉*X4s& 81%}g2k +g'R$jӬӶf)oX۬QEI]s& :aҘX0.1>w9Ewո\F*.~jQl| @24ϲoxCa^qzKЄJp?50 %)F^I3ATN }.Zx _XTt2TAb%0uOnOnu ՋWIOߜ>g,R#w3OnO7/ֆ;HJ D!߽Y9\% f|a|*o+մMnʫYj 53$!Cz_0q!2+FSǝjEQ?+Y5^qǀdi[D{ΔB:D -=E뱖TxKc0jC Ԕ&ݒQYi&,71a@3b~»5dv'z{/j b .B8Uy#ulxlK>D VIV/3 o-]vhHbZJc_=:/W.>/&1#%v`ї7h@|imnI ƺ] JYRzY+KSߏ+@[+(d4x(^ōF{-9h1 >uJz|,܁>V\0+5B ]irk$*):& Ȯu*٫>j$vvܡlQئU{o c /&꾁mx!i9`p,[Nd%bkS,XDQ0>0Y(^`C"Iږxm)e:q$> ջZDd Ǥ9L{ Va,JL`c:h1% RH |g$kZV O\)[2! T7yZqCCٻ89:}&`hK 08`Ap(e}l}ckPIXT1w0D j(C]}:0EJI9J> pr(bP+DeQC-"{YDJ+ T˛"p_ns/N\<Gl ̦(եq/?O M7K~~NyLy5oü tQX 8&q\!CFGzfxX0-kyc/@o'$LhNҶ0a4}dzo]?fT`{8'YCDΑfžY){PI}R7+}3.E9&x"0Wc. %",ҔN$feVimK fAq>5~dPFH v}X-%IM2r!GK.'2a*b@L\9uÝ|be/Uv'ϒ>7Cz#Ӆ|?~bH|R$auϳ%3oTݕ+pwr;w&M<}A,TIeC6Dʒ[Jaްi$q,1bv<)j0w8%ӿcح@)Cs}ouIv7< O,խ*q+2Lf CJT/pQߤBgVџO>+'n?߬/lnnxs3Ox&L{,g'?\=|Ն!d?WFѓ 'ѓVDOãGuzy9z}s^?[Kۃ^lIdl72z11JfF9k=Œ>{ H4/C :bNВPw7szΗz1zЊ+g^a2FJF^_C}ͭOb{4rzQbA裒UL0iI&﫷 _kա覆Hz{խÛrmuRd_Զ ZD tQ tX^q\woq ARԽZY5vk `g(n% J<1+N4X` ,cH<[=gJ!ָ5 B5jHjbli` cEBֿrټj*qdMLF=xC.iAeP4QNz[s0w91`,w: 6wf ô7 nfk :ixWO-+z81Ev?IyL~ n%٘0/0#;Q)`虈Y\HE]א!PS2$QA5_JGK>^阍xB 4dDfJFCU#u M9!hTud-[3O*bv{*Ϥ}^'~r{}rE*n4df eI&r.Zpj> ֢ߚOnbFr@`ĦB|!|wjsQl5&QމɺRO fVUqUcڊ6 t煩=3\/qgr{~y6!鯿҄`5+/ou J̈ņo<"ߪ*S֦\HծV*9ڻP"m}F՛%'C{nf1 -23aZRliUUIL ;l_dةWhVzyHٟ򙧛_wGU$f[zUۣ>5mFLj.Y15\11$I͘l2gPjPD`6T8IerǤwKZZUy(ܥ[wYzĜ(C$h4ŀww k9>$0hc/-c68fR24vw]KTsuxa-SqLoA]P?fS A:ypábE7FuV<y=d8imSB gǻJ^hD̸chs/ܨSΕ'Bf)Ggrbג<渹UxGx7ߩĈf֧СM!Si8ף@JC>\SgHAn踗ހ,lc\:\ʥ[V(&h? 
WbCoN){oޞGw3}̊oo inJ}ӿ;}{J23f - YZ@L*09ח;~iam]K$Pi,4 ה[`*L:$By_#IB/٢"3#Vص{aa`^lqZ-g)Rn A2+; @Ω:tJ(.&(#GEDd@b(Tmj 31z_&)]- G($#hҞ%삜M$T-Rg՗q!P@yʒyQ s* 1!7o% 9^8$ł,Q^b,ISxPu^VA){|r_}ݼكʝO)[S3]9N@.6|4p07ֱA(5C ݧXךW,wݷwؖd,|3KeKOxy|*lDk~s0* _vjg,'"A?=xGát챵;ٟ&瓫ƓrZ)Ck~.nǪ[5^e*>}x]Ao]5(jn@Rbn^U5sWTU 7u9mFѨ]qD9aBI6b<>T6N] NSg 1=N[&e Ls F E^&~?>`8bY\_tMq\yu].y5_ 92Z!;Qj@"Tp"J9R^>On.N'!],4z1za9KfAGUFbt3(칶-K#UnARHEZ/1W:X+*$Y9ȿX0Q~TR>9+~kT9QyrS~)e2CXcF*2τ]tόE.y9_B+UYG}9Lu OVzw:LUPKݩa}ъ>&,BK=2 ˬCsJ9XPM8߾s!fA%Rxȶ^&M2HU#eƌ=~H56z>~rxv)MOWjuoKuf:=>(~rvsn맃VgA?|"#R/õKLif-z$=ȇ梔WinJ<^9t+_i>pNZp:)f]7/Lt:F pR61[bfX~zFj>ǭ]|)]kTXoQyBč9$B1K(l-N"a-? 0Z GmdqӺm )!UtqM a03p"2>#j +Y&1}J'UѴFGϬ\z35'7v4(ԢߚqŠtxXk9;Eu\#=ֵLvܽˑYZhjిbP9N<_zR9|O?l?KI[wqu'ȶɶc0 @sOuayBS~K݃inoG]GXՆ6謝6Yq܎xi r$_t2šEgqFEXesvmGd(YKʕEлGeSM(w)1\Pq{DXVm+ꥠ}Bm+zDwЕ 'K+Aӓܿ0_M'hz}~?'2 5d/OGD6@O튷J<\^(E=B#{(yh nk}v:|EVÄ>uG[~W͞4.g?mʫCc)X79Duitgg'ybps:gg>iNjWzЊ)r6tG䪹#/}ϞSqh2uYj%,4i0tQ5sZ\.C/ Se{v?l)Y>m5K 2``w Z,e^< `V.]5??'B<$ѹ͡͡T؞Z2r^4JEy&htMsv5XP TCƤ֞eogDf}sGj^o؉-OE{VĪZ+,/7|!T8~Ӄn۹j3&=: olX:9Oo}mkTnk̈́Ape:ެғsF %V*֘fQ i7n8 t)X"57vwN;hE#j=mʇW|K˝-mYb S%efN3. i->}:{UB% dE0;N-_v2~0i.WA%VX.DKŁs&()A+<؀"aRKfft#ZEEB*JXRON: tif Pa@dU9Zz#hViv>N=$C$_Sʤ, D#DUL4)Z Iڛ,*EH6~ٺ  ̟JJQnjPҾUR!:eqЫԝ6hs>= 3] '1`N>TYl%D&p(6!S>.]\9OaN'XZs {pfs֮=5({)p.1կgkTk恖׫ÕZhO-2ɛ4w6N;%ײڧۋܖ9B31$bA|s;,Unޮ߻ﶘ_l4ڄ)&/4hT_f3GӼr?|A:DGiciAjpAT5} `^-UJEI6TP%E-Zf$"{_[KjYXuZDE8/m*dS"9!)2c q CC9_B%ɣ(Z S|ȑ}/@zcMd/uje)_e6Q iFN=^zA~ڣbqcz ^`g1jhv6t,8Vbvv>@Dm$.ԋf$Mc~m"A("rU"uǪ![!'m#v(xW cWL,Aٻ d.qebJc^D RYRl@-/u,,5ʩO|}0  ȩ6.FaJYeh#ɜ8SL!/pƑbI,ڇ0"rKi%0bvQvuBmujjiTOXG=kȲ5={$8Y {O- 8pJ) ٱ̄jQӸ^~~FwŠX-2hJls36r mKvA;qKh ڽ<~} ;dFx=v0hͲbdFdy ŹTAmcPf%c-Obg`d I5bd$O> -L FHm"D L%׀ht]ca-?HByWGv sc\tl aaMKZhew&6uq޳&X+$'_6myPtl: u{p 59:aFr [.­R`s8g/׫p:[F=<詳;~rq m<]4!{ʿ UM\m REPv? rNagaJ`7@;fNaoI%$].ܼ3%{#^k_\NǮ}=z3Yގkf|Q{5Y'%j(2wjeߺ Tl[ r^WS@`V\t$mZtzJb-0Dx`JhKr޹M@j1JVK'u:SW98T&ӻ] Bp50+)kpxr#bR|`ի6ՔQy^}իi]/JƆ3PA/s@A{%J)'[ `t8B1fqlTxˏ q'*ywz!DeMJƈj) OFsv ݢ6V|ay:p]Z:[EM ղ-x !CO>tc Pb+޴C(\K)`ɹ}Y&SD( mk P5{B?XTyX9.(dz!KNͳ p``q_{__O'":5u:>);KJ|L;] hW)nU$lvI ǯ{ Z?MȞsٍX::Y t kFGP\igRna+YjM4vqQ! C#6`K آ" VBLsQ H ZR)DCOb(ihlEr I TO >6'É'Ad+`XQ~,a+ X PHi_@;gJvH- .dKM)l$-UC`4:B{L0eb[`w,yP]=y^7VC\oiJg%Xdpq kc0 c !2.x[j8H,> X75[idKKhL` ),yy.aB@ @uŲM!,G ;Oؒ%/ 66'Ԍk|J(Tz-u^K% VHDPd2`\}l[] ~k$Rm% ڎ)(9pK>-!6nij&S-Cw_P-*&µB՚o#?ݍ &j)rޢFRMX":_BtA2MIt#Oi%ֵ%I$mHg.I2%uv VnM2wA8Fli3Z~rv܂|"$SwۥT\ηg{A+'?.6bjV)vCWf> ueLDڮwǼ!}AX^ўPA˄9 Os%2hu[~D=,'R,j4[._bqk5>Ꙛ\| QxzSiYXMF s?yI̧,3Mj=$"Y!BI%i6Nz}Ua9Ahi瘗 0s#~NMm)/ a$fOxfVwIKiG_y6ȕzxN>i'T_-᭴͵ ʊHÂJXC,,]xzl\Ek 3;=Ia a˪sdS>%SOaK*E^#QیaE;EڎdϚ@l[=xbF1p&0fΏ+<ҁ#y6'yIg*]NV0(||;>ltHT޼6L@m[.Dl$SB\ҵM]{7D|6 _rQִip) ud#'Qr n~m.I}Ev?Dw3wZd˜6N؈ al DdPEB䞿x`V?yu׭5+^\9CaH^kZ.,4pq8Zrڰ~H=x ߎY"J =2o&)"3+ItG![&}bc޾*-ӝǚO Efb 6$16$oK^W: %RZaE!+h0,SC6t"i!l#T2w( sW`yѳ{"cU%3:b år-rA0ιuA e@/) Xtљ\DrHTN`~s!n3r\ Z_ Jpr.P$/~tl)ژU?R! bXZ3k쫛 K`V⏼C85}okE̤=>=R4pBƒ߮]7PcWǮ#eўrXGߍ\><̞GgF~<X<:1oxz*| şw&aΞߙ=t G[1c,hGw/ yW_|6+I]V&qjB! qu MP0 8e 98+?a.c+)k5cX' Rтpⱋ.x,>QY_VOxUT7etpgsȟV||q\cquAٲ$tߞ=ؐ Џ[%V4 ŌT/Oyވ< )SvaSZPfRLdINq`Tj)2-3BI'!DTA 55%r4I"9ɛ:&Tz5 J0T@5"9T ! 
5Bkkcczn,$1ԫ yNh!r*H:m 64T1# v~5= Ì7mşEBq2=46"̳6NM^H䬐GZk70?[ JuroyDǣ4^%/ j rrт7Bm^sqK%!\HQ&vNQ×&6I@*R7nz4?NêՌ bQQU >m/F\y=t5Z|vUew>`9| Tsjjt% H|\y9{gJ_d^d3D"J°IVk#a% Wj0Y.HPB(z^J&H fxvWʀ_!#2JFsY<Ҋ 66睼;ӊ* r4V8TIM0sA=vE"Va{E-Hz(C-I 8}M1JFx1JF*hf$2 13jf$+h5@:&y(DO} hzq# )H*=Őy/>@Z1Tm11g4 F RE7w!:-x~(7/<=!kjͧbh>uťy7b)`_ +U \:~sM)qjꅒjT bD'u&Q{ƑB_v~0w0qyAL,^$J&)޽W=ȡH='#cÞ ԞVy֭ y*HHN1?&Iq'&)L&Nb4\yzQ`,إ"V'ev!CUX.$kUU)/jhysFA!iYnWUC#" %M'V 3w\WIJ ܡ5rKP`ݞOZEhaFr"_[vr,Clc 8R}i{i˧[_ɨ+-DjbDO'}2dzm̀Eꁩ1H!5!B9E`Tc #AdB{8òNFJKƭcJ)iڬ>T]iڛ[oH8I05a;C]k'%ƍs"EٻM6>-01:Ms#bg5BYJZǨ R2To~J0O>Mo|4U&G,JR2~9l6@n]DTX ׽YEǕTh*?=")tŮA$%K.BylWWM4h:ʎ"jqW t97#;V lT6B'ݿ5GfR߬rRY\yn_yk!~>`1f1%I j#vɗm%w=hڸz3`!":/iglVQ{zcК y)/x%EՀsC kL"K4qdFck+°]s'rǴ5gdJIG;Yb^32xqN;bcz#S4{3W_ B9 aovuek;t.뽟 oqYSIt*nz`!!dha]8Dd{:l.R^z DЭ^')fB֟Q n;#ҥ` #d x̜@2h"s"̥B:6&QP$Lc6*8`֨a@sR]ܨ(5*i00f0wیC11j?bi8g`owEܟW#<]*+f3*_\4x],VIv=Xػ yv?`Kמs#QDQȑȓ V.qɞ3@7=>NW#DB5ɧґ+ h,[ d 8Jb¨AG4XDDsq/)ժC%)qb V#J0Qv?\l)i1viiګ𶈵t>\(֠:wUCNbyC59ҹ|zIs/WS#K2tK"aC DZvm]{hxJKN+BAam@h--8ب]JFKq"]{EPioi']]VaF{V] ;=wĚFxu~SRhn\CS${L"W::cRr[n)++l{{Rs*3߀P ®HtIW"yb9hS#Z*)oZE [F5yЪ c;_BHnAPT;*)=ܩRae2JkuLgIuQS=e Piǒx]b0Y_o>/H6 >+;4rZXYR.dw}Z_eB`!sHNޱl V"\o.6PQތnqfDåvp6a¨ yB?61MCbY~QA$CvsY1:$#r9UC 9/dkxǹ߮@U; 9cTPV 3 mZyV1%imD6FɘN\}vUh䈑V. ^t`FjZ[[ BYYP%:p.$ zhe懌26RS(XGw0i~W7`gGGQȶPv^TS<97+V"4e8>$eK-s"yr<"[_`ǥ&"^JILDѳ oEQp޷3eb_u/4ǘ"-͋~S/ṄC7?+@(eC>k&#/YSI/݉`Jέ:k,)b /+cRPZcڷ[gz-~1=3Ma`DZ~ g~nA6)Ѭ?u0(RGS^Xyer /hR,.% /Ufwy+G6_O_84~&8~~(dBq,dBqy9}e^ ~=x?r0.&8snOlmtή!&<[?aXVWI)!(:?#> n+],!t( [aIJ%Ȏ1h٣#&@ܗv1䗉[Rlg׃`2͂V_{罟&w__a~7sX{IИ@džD& oDlr_z6dXc7 Zӈg&P b}p(3 'om`>HWdL nPg3\~=qGqa`f jw ls&0j"OLр =FGj#cm[P:އMݘƱ8vcnLb7Lu.Ĩ3L8ˌ.d ۜLjWT>c("yͰ{&Rb- w"pd `4'F*Yߖ헌|}&qȐh2IK20"l.bD`ŒΌch-~wR LK3 ^mx 0083a.yh[]NL\Kp9@6[*$Xe3li|?S2aa2+ ?{z6&H~4g5gϕ`͂yDfy`^BeKrZP/(XsFS]-}/J`>nVǓD6BW[5* 9BJ~. ꫾W=3=3.fla4xV+8p22l@ "77|0Ș6GχU -jVcA]>=_"8fK-\\KsLljX(kn! 
}w*Zb*;xxW)c߁+cGX#@-1}Z69zc2Fyݥqx=j٪ѱ] J&e9jT,CgUKQ|&rc+QŷҸ e%ބP%=A%Z,SF[#Y&=|2S)G,#ʄF&'/n]ɨ D9%%hCCвD`>-A"W4U|yv=|޺J=)=Zϵ*=Bf7efXf6efXf6.m\pLsQRŵ`=%(S"fmL2!XFPp7_TFO_ǂB|NlH#_XnHLW+\g/dט*Lć>v 2 6A69FtUB@=lk؍Y5Tz5 2Gfe^CqDQl7a%͌ENYr #TYp`k4(|mJS –H'xnײ=4+d͸K6C%a^c+$4p&w b_N(1ƓXYP"RgSUV-0kW1dZyT 7`UJI"d/tVZ8l6t`6/VFfB {uXnm6$ԲAdl x;$SmYQ<]`r8k3k/_³x$JhMwۇ/ !ހɊ s߯߹]-Έ 8M+Alf` X^s> ("W0?֡:X_m9;LZFzL1[\>^TcEf^^c mSo/[kUoK '(ry_T=|Ăd8.W 3Ό܅\_\6,|.vake<iϵ^n* e6&AD82Ũ)j`1>')Z}2*M +gѴ>H n)RJ)$?%mfay:`N Ʉ^1&l6ḧ́[mR ZP!y.D[eIg[sen GUޒf҅f+IEe4BXYWg;AZ#!g*L7ג'O(N(uoqXc6Zh'oiݭ bu1$X$2K!B:b%3HВfi  ]@ c}׻?VS߁koCx^>/W͟sx,b|NrOu 6sBߢ1plfs ^]:o:$d!aYjcFr o=ҰꘌR;q|y.)rݹ 6ZSIl/aTѪ!@"B#K,1R/эUm/.Ա 3[Db('%HNбmH("G-/Ѡ܊x|k׽KgZўKyvzHD*y#pt\*Tʭߥ`~uȋ,GJ4nnYV9"G sEE؜k]a^Wir,D8װM/!D녚z%CBQ0^ԐR N [Ti!:(T| M蠺N]w T_D'(t~r"t`oA+ʅE'89lem/^(k{EY;[wc _0 )DLjŲG;y` [5o#&VWѫl\ CliGd\.}W;1%6^ꯕuSVJt1[y"؞#){Z*)k:Sk)ZjVK%I}WZ!͖r~bA^ u d 0W{:Of4_ ӥ70Du1B܉tOxMHАP"7k{u[*NR s+Ų*&E* n{Y|wAp;vxn cGU#4iɌwð03BM9RniK$ExS[ጭؒ$<𤽱UO!I3t)8ap_ 6ݼM,qE$h9IڜD$K;p}RD@*ɼ@,ŋ㞎~ 4>y~& ŌT=qQ{Q pY\$_*V{3Ei:`.)fG*?,Qyl 3 18W6%T*W,cK@ű4!رi <ѹjoНNPDG<ǑΏRȣw6?ҋߖH >YD'͙Tx˖2Vp|s$P4 :>X/6^U)͝Ě.7)=6,`/ݬe{hblWiVfA${]J9ڏ?Sm?:.u ܲ)q(֞X'B Q`,A)*:suk0x Ԧ>;3KU/{e ~7No+Z F3a0F#wTGeJE,Ö T9S uז\CȊK%A9!AGE*c!q Dig5Lhi"zdL-+6A[d$mj-E݊Ĉ{ ʈcN 0T2Tδ"9m))dUWž(W-%1*}w5%*Ί a//{W"o=-w?JZpqNx%͡e+rvi`ů~Ƒ6ADt<6?F?n'3yw7nl÷m!*[͍O~p~0#k`s=u{gA02ЈmEx\h쐯$3g/"WQ7%ƟTf8W7  X+!e]XJYyz2Fw6 zmblMpT kt61F!,vwJ(WD5$,~%nnʢx%v)WɄDLoN&Aau҉BٛћZ~G;z;%Қ3Ct4Esu[ X C05]Ǿdn $o "azs#!_wZ9rαR/Ee6L{fS\zMþ7&dk r~z6ݠ5_/}oCwfr }%jw85t;N 5&HH="'R wB:E^.fI-.;G t_aĄBzzW**j=^Y| A$FVrpG<Z)&F)ϘH|ɳިҬ7ߙ Fs"#0Ե[7*\xP4 ,(wӜ9$!YIQ09#XYf !XFPphv*;CA*26 6AJH&`~&Њ-\J nQ IVARđ&69R M!]꧷""Vߍ>{`fr}پx0~v1E^R,bv#898ZK 'D^q`x#aSÞPf!oU'6kNG,ąU9m>pR =V{Ԯ \zªԲ0ԕ Ԛ/z7wb(64 3nv \}k?»晩\rΟs;AHy=2~BvfsB 1xo9[~-q_{O){ҏiw"qU"Gce?݃{5ˠdê](*؛P[I#/3enEI0 7y Ǿ[$,RҶٻOvXY*V}N}ϦaլS4kNr7%SLcev}A "k|z #Cg@)ؓL Iby*V㾗 ΂B;Lo CKKπْؐUƛ:Y¡=*aexǀT[/A.Sp An 5֓AH-N_ۦ7/!lvL*"uXk-Ta{TӾ*NC4(S$t.Ot j˙b\r9CX8I=O+4oxWz`߄\/\RzzDbzk9Kj0MN$Z(u>lbO^Ȅ^$XPdmXԇ(_z1>Xp$Mx{qa !| ,MΘx?OZo'҇~}w~~'z}Fn[:] ηۃC(ޝ=<dϝ>xc8Ǚ/gw/c֘rj'$T:D ZRc"^4a:Di=@J%KY*`sj;_ᦨةAg%~m:~Z Y/mj %TR&A:TߥS4|~AnU[Aq4޶H'+HRtgXRiħI|O?0 M Il'Q&}o5C1ke@(8Qz^4&0e@D2UC!3ɺ-Mi;ҨF&Lr&guֆԉ r:qՠ:8S(S 2Q 2xZ;\6y&9[`"ȫui@0tIQU\ AR6eh[6{15BC_7BHiOvN F*(P8_] br0l_~zqB;ruan˥;/Wk;y}wHŶڑK^^T+c0 \p"Q[=C)ohh݂.Y>HS3a~s9@eTZR1fzA3G0~~K}[PX7Fa9R!=S 3:H) >du;Lݢybis'y5܁μ>IyJwgXRϼ+1kC6!epVμ#e[ڠg^9ЙWpUμƄp3=Us5ɺ-Ņy`e &8h ӝuՉӝuӉlbӺ2 |FIIE[iwn%ND7'mi1pJsV>V2E,~i I59i}!Hc{9NMl?i#08Rtph69`^©648UdݍBmͨŦKtIO=avw|0(IPmѢskBTB# Z:|L+ 5މ,Bľ t}i56RԷ3xTF > MOlr 橡"o啼^+߼Q*H^-;8khj0K<0'uLE*|zl6W?ZZ|:փ .&arCH3$ $Kr])30t I\i*|1].Ի}*@Ρ`uy:mGP@h*AϘ4+[zOoZlxܔWqlq{^>k=iz[lb_>r&rܼ)7n>P`;/zhkХ+ ͜Fsэ(σ֮\rsѱ/FOa9Ji76bQ{C8=_("fS۫F+h ̐gsȈ ҵ!%R=G);yrѱ/E({8sQsx8@lE39XNwd̼t\AuIwSC*G2og0- JE2_kncҸHɂ}%J=r)Xi!74p` zCO'V> {H-xAy=E}w.Ovs5OEߊAb)%ụ7תJȼwP& (ջVTz$ PxUDj^WV%%3xyVEB lhFQ.ؚÃOyC1qv&3(9,ϯ֤{*y{e5YTr&~^~|g)뻫si-pF _DW D[]:&@i h! #fOaɃluM~A%(yZ~af^Cೌ?yADˉi=;9tqNx/C(Z]7I߅BQ'iܼxGƒ}@ă b0I=\޺IR޴If. 
Qh"8lexpP v]㉿ Y;`RwH !'!2u=DR*bI^˂Th=a,rfUGhb K b`l< vyLǙ5  Fn@Q|+KF2W d0ʾ4$j聓mCrFZԙ+o~%- UXyM~[Ky;j][/mxv[$  ~釃gS&_JQH@̽m痗՗x򳼽 z[nLzgC!hXhyxbghʮH-: K"ƃ>2 !'p"xgh<XRc^`qx((9ᬙӆ:* QL!bow;NvLž;1Z9DSȗR;S|s/8R3o"DIڨ(Y^R&jbHL3[ 9/N CT0t bq'rU9酅w`_%ikYgiI$|Úu?+[cT{n0± "\)CBY]i"(+('*/4!bAA-bd'|*&1,)ڳO4!# Ņ i$Q%,֕RW O kAA!(^ajPG@}L _|<4ӊE{RqUQ,:Zӱʞ$%>K /1@-dfVxWKyCCMlLB(8holAF'7x",-8Ώ(TXㆦ 1GV(*U_Fji=WM\ |kd-:`v%+DUBK# R*Od7X َٔNk@gj&+*%jQե\y h,`g Pc)Ř)~8(餽cX_:ɭ$DUY-%D 6L:Rj9i/k ӏhƓ4:Ĭ L¥d2!$gCG"РۘV~,,aUt,UZgv5q: &934 4|kaj~ScLC:a_駛!)g& W98hlQk!QGpʄ5N$$H y-#x7y UL}jw#jDbS14̠D7>&heɘ #*Sш삧Oe^V9˦Z/ f2k|*6a"/gMT_<|"9`NڳgWi UӸm O9FND@K4}xJ/zj ycZ+0q ĝN5D͍\ BѸV%rMd{Cu:-chцʹz0Ы0E룗4:%D"DgE4]QMDín\s3)d^ Ջ@L{Hq7 ) _Ag M/1PEal4 98"QG7ژ:ZQtnBw,T*ੳysOW/~jn o:kH},I)sv19 cD)>7{UH 5c|Z7}cr?^jVAB?}0O渼^uΔ2\:x  Z!ՑeC;|=?ldl %Ss]Y%j'*ģ̉p-NVrU  ,䪔WҩVuVˡ&Q Y/t\5`Hn91wLבCô#P¬ $.ػjPwT3CIޮN4._^: ;N"iB:9$kH'CEhWgd 6SG8 )^)GR0Ap.jPEhpeX8jD 6Dδ> ^8S Jͅ)Y-2ԟ5SP@fJ92ITpqwŠ7NjW_c%{V<[qtJs$jH#QSӓJ)EבE9KUb!!j₽+E`eԯؔ*a&/R->>x#jH)C2e`U74[1iH+Jrh5V"?ڠ>mI5 :;=%HFr)Э_1Go)AR ѨC-&3@%ƩsоעMk:VҼ=ɐZi8Q;MD0ram>~5='mI^貘Q/<8g#"w˥;WcOZtBbV_4%L[JVS-[tO0in&|BxVb% 2J j7rNŪؑvDa B+#p_2PZ>/ @4t~hqO!W#|H3t\7=-8QD|ЀpA)tBZq"S#U`|G| ǹ٦TSko M(j]6Нi)ތӱ6i)w"0Ky2Z'0J2Fȹf16܁C9/FKv+~G0N!(QB5dVRfw+6)wȽT5 ^ R};497:F[c5+c "V~؉E4"Nh$N?r,\Q(+(Yn.4mjك*IT9iuַa2Ln=6x0'#p$Q0yA|d\6'7ׇ? F|j< Oo&dBC9H鳇2 y@l=irU+mX˥tU!(Ÿ>Ce:0@S qwwDm1 V[.t+o!]yw]MJ|U( R^CWX֯,OGX{ #uɈRQ9 vmɯ٣,Δo01{??ן~ jJ8/:Ö~}rrdk*kE,Ku5S+>&d:';:IN((WƻHej]e'N:%3 K2K|9mxpJ߿z; \a:x6|M4sɕ|"y2ˏuWXWlh]%Whh#ig޺w%>o}CN?< ?k%~YL뫾&yWxhN=X|c7WcfG_G?kVsb(0w/q}{yudd憐+/h vxlaC׈64k|ŝMޱUk1\A&ghBƄ<(ئ;qo!FT*.ʌ7f8*X2 S~Rg &fjИ%:: cySpڭ'w4^&LB4 I=P y$q!}aшGǽu&AgbiԢjlj\3\g$RcBV5ؚCrakP\M_{W3R=>ŷƬYzX'~yt=Hlӗ^HRkvr7!/;=bɌއ_\`<Υ ʟ!ʟwv2<|>5Y!wEA*ffwc>nPNऎLTƷ .u2}VA7 KQY=TJTX Qu#yZG Tu|#rE>XLU"f(JI't *p6;岬DY}-x73/cRʗߛ1pR5?ܖ6m{Yxa&I_$"hRxʢ`ۜQO %IO ];$`e>M*ZK2XP,(kРB ) :!Þ0 \XHXr@Lܣqsl:GҲPH~N\0J*ITYY0Vs[bU:+TC69nM_Fc܉f*?Ke ϼtW1srǫ)oWw_<<8ejʛFOw!c)vScߜ,4*Cd\rtsƲGN-VHm-ԼڔIRe rKGu_ ˾@H# IzqaV/ Na~IڣzDrqF4DeII(9osR)yА(@ G96g2^rpmƄ7|4t<۠xcrT"WLD#ʁ@sп|y['I4hЭmQn4-u54WJp`Ok،pB5p\O6|u2ܙL_x S9Ҏk,,)xO =k墄w@ϲ؉DqmIۨ^Wm[˳Y|-ׯ[(4qmHZ}GM'p_#@H8-B+  X.jd)ڢ=Do6ph:j"Dy ft I! 
m$M{yF3b9+ʡWP^8r4 [|dBq_3yXXۀ7ՄS8g/"s4 [ |Y8@cX=`qCe{\PM ZoFO@@r=4ŀ6|K g0qE,PɤXO 7{y\A GJ=.|OV̱dCAUOKYI}:(ւӆw[ܜȴheCYtؾiOtޘ؀A2Fp΅`'gdĭABp HsB`s ֊ue?QWNE:9+H\RG/zi(㢶 ,0!Xw8ՂJR4h/ƃcZp_p\@CU\iPI1JTH21>Tb\_챤{9 'ͥ \\aQoIce{ Xhaiaa {޾ݵ~0̏IO N*ayQ$H[M,Ѿg_j%4bSBWqjkPYhmbjH":b0'$ ЮZƟ3[n°OUZ6 \3 +"n:D| ϔ~ }';Oe`~oCqOpIu>Sԍ Ӑ'bKU~`QxF܊t;Ǐʸ_i^9N@ u߱ň [b.CڒP_aUZS+olhK{.+dK^췍sʚ(ј0 HBƽ/lr SJ݋1NX;Ͼ2m׵ Zb\5bz?%1JtqqL}@NL7y(Q7QnDT%zuONsV앑$XG8kDc[0# n“`) E.i=u'uuOɗAQtAlSv뒔XP%t`@ZTi $\Z-$v錽>WqW`J}bdg}" ,xcxOV(DKD<8sG$9# WJb':蝡Z^>soYnKa <DIj(,09J&11CTÚfZyS1‚5ndEpb "2`tҤ t)gjE}P@CX_$"2BjyPiY47@H2=fr^_&>ceܧ?ԾaJ?oB@s5z rf d]%勝_<ԓ _&qx2s3)g!Z3+ g*G*g)C=olD.hreggwIņ9΃p;vԻ#-;o;>q~~D~x/΃ 2ԩKo r:ʁSVEv/N iO(3FbzM;߃Oapj%Tg(e xn' >M(Z7QnUMt@=a K ƹTJ-Vq,/QF8bq,9iDN~I4Pvn+?>Bh;VIVٜ"HB44!hsmZTj0FwpMbPo8XKBjrF@ύ[xvMm:W~4&&k(o l fc'%S `2BsqAAuIy!៌u;DŒO}%/ݎ$yWI<0Ǚ1F2*L^XȿN'_3Pm}hQ[^hGmPJ;xnTpxe2ÿa4x ._ sQ0eTҢe8䃳lٙҬ~gmuk)!jޥl}&~ysp\7pHpITil߻ˆq|m~ڪf5(ՌWݦTp&l/w4b^ðY{O!o\!'[S]$~<=H_Sq<6] Wt6bi)3-\ܯFϱ Kvp̹Yڳ@2tx+k;TøN$p;Ƌګ'Ʉ8zf\1|ejV:CĞl 0tlj{gbA\6;&i-f&ԥS=-|T.dAgxD|$x_7#_Z J\ ; @f س@JPP5@ƘҸ°( p^Pl)NIzNJ{:@s;zڔ^M9E̥lKSVrHm eHt+ s.hˈM_}_"qdz?bưl#hN{wޟDF5\Go6ڏ>Ⓡs &mͼ]J2;Ws ]ݑANQ8[j{1clCnk!K+^P5̇>-cPunV֚G;а͵'f7W$tr.,|Ym`ͣ׍2`B6{QDOpuJZdin2?:7X%ΫwoGUFHinCmFA0iapN{1CSO/Vi  *Wq~7vՁO\ َ?}&M+pOYbT{{\&^Մ ;;g휳:nӇlmEx\)2'(՝> ×asf' 97GG6K@#;91??Swv4>1< %~ QV ëYN)JΌ=9ד}fbhQ#MjNM`)|GbvCת{0E:%,y<"{*SZO/ܪ<9*sw8}c| H}^d,55TN (١\ t* > '@*x; ^V1ؕҐ-(*xbE!NݭCQG I "3Z6c U~[kqK7vWx[a3?x}/`C{C,[~߯"ʝg-&"/IZ7pV1RPt{h>P!u3m,S5TRQJ?{Wƍ0_H.Fgp lm2o4~!f6b-)ĞQTu*>%Y +**B̼au tkH8җ4fFVJVY&F!*bZp%󐓵cō%vrN6h+^.YyEo#- t:MP ƕ *VF8M~ Bc\vR.Izy= C#N mIIA{rř1`iДzHB) Vs90*ijmlʬ6 @ay)IԲwx()1!{AeS/ɖA5]gKE4'x!\Mq2Y^y(Mf'#?x)&чۛǶ=KE9"0z3T%9J2DZtWJځղ6(+l6P׺V٠bQjHjQiqin@l(8߫Q%{Uɒ,RweYV]47.x1Gnhr0J\˒,e@G;GXPZk GQ;1/a/S=7'{aU2QRb Jj%P:A H)H=J lBOYQ$5fJ\2ءs`X3,S\;k66RjZIU-% ⎀+Ey!Վ\v`%e%*\v%CNxz=t&Ifk:~?o&:'JC^+%A@WqIh`t3+d!>{tFC4MV*yХ/YP}Ex,FZjc5CGWU P 22)~M9hx8(BUVVBzC jn(Vp PL٣Pq)!C%.dş m)8"}2ij5}<\O__D0@ d( rC.˙B~v_X/?φ?"َc-&4hY;?&w؛kr5R$UQ)h #{=X {0Lf}_>=u[Б&,35PO?b&PNRϏsUwE>8Ldx%ߜj^]żbP=UW_E|U溣j&` :ˊ|ptph8I/5 k+ie3^:ICN%m5~tNQ~|_˿$5 miÞs}c#cڸz=Xf<ԥm+mg袏)$,(\n*Xv{M7gT7HB:3jm3)S29q!:oQm;KFa:q*4_ 08+]z3]~7_ʐN iZF vV Wpɇ]p^:B2%YZy Ŗg<&SMgJѺG$hTNud>Mn{RkkC='*aׅT=<lր&!b1ٷ?A A}v[)m>"؜[=}v fub;( m@^Ij:~+K i<]zzA 'x?mSσ3"dhg`}\gm+N_<6vm*U@@55\Q$xp:2Txzx P WW1NL/z59 6G/0ihӏϣg?|oRfEl-_X$PlF1/{V7Zh1aBBA25Ɂ@SX.elC%"*I peYLk(sV gy%|E50YB0(դ*yf0\]|(U{/=B" Y-au 4T`y°R { ћ'dvv=_z5ۘ~SDm9 O 0L/?}k:ÍT߲{B΄e ׆)|(Qnc) ~כ tw1;10Rq]=ypK1d3c6k;}_\Vz?(pn[Wצָ䄂kC0R\̃oo2f- C JOm̵u!@ܖZ ƩcE묿K#Ԅo9n ߿}GR weԱtPR) SZ"Po3*[ <D1F(yq(1;l|b4C\Z>)Zc1|39Ht9to5 eN]s7 )^J#"x@?)Q VP .|*H+a,ty)vI+KG4,V֣2Ңa:j815Օ SCxhJ~ɫ`fROׯH㧴7 d;cIfl⨕ZB?pݠg΁pS-pev`@zpz&MtglE}n*/9Jj:ôϤQb+x*[{HQ M{tSBqgh,m/JxQ g tE5Gz?~@hSL}^^_SFo|)`\)W責T9^K;'w`KFg{{h:Y>w: !#/Lc}4Q 'qQSĴsЧ+^|g)E(JcEma$u8Ot036p.}}`4ۏex\rs4)@o>b|,*%x&͔2>5c%|`'-8>7gxhuFr%߹#1"7B?50y4+09q*R2Pc?#I>w QHRDݝoΤ Umfճ-)6t>HCJWN00tX܀ JI~0H'⤇J3deU1cliUL#lUw|vGJskI)XMZ|.cKXV& |hJD}Uxe"*bNZ zI@V9^,T@Cq?PBjCKeU’ "%d(ARqYʃ4 05'90 3N_߂lJ^Oc,F0#JZyoLJ* .Hh|H*N boGd9NAIEI+)X*+a+B[Ő/ɪӀӑBP"b^\T4t aT$hUkXSVβ7/~%β,`kٴB$veGӠ!Gl8PK+!V$i7CYռV;eYPJx%8Hs<HYZQ@PtvKNlZ*!Ξs]-P@`vH7MK1b3RQL|p3wV16⫽NNz͏g3q6_7O~4NvV_$!1srA%)S}Z Z=:cD~V[?_8 b\OxIGtiĻ7L[Oe][Ϗ>tsҗ6ɤ/{f1Z"}iQmFIJԟ$Zj!MD$8 be_98 I] J ╮(^%l+5s&lϮnloA݄[>G坭$>@I+(K  /8J?{Ƒ 2}.~:{N,8䈔bV)y$r{f8KⰧxa`)Zn-=zBN?$T+!W\[nHZ_iAA`f"o]# ճpP$٨ÁU/~`LvP)z::&:څJ/V!D#ݬwIa6TA8)]2F_B2i)Mbij*jkHV(utVO ]&VWRX̭S(:.˄>zXB \rRalm[̳\)]ҳ Lkѓ+ÌBZdҋ24IPJ́z17pzivpmVa`;iC"wDO/%LfmN`SZ"u0zUo?˯3ŒȈ.Iߑ^1Y)CC5 )DTid>. 
Q]E`gjPde& {n9D$͙Luo䠵FP֎xLR%&#APe$LuW}6 Ys &}ZSw5#_MiYgW]KZ&K\X֜=I *= v\)hZ JD]ۈ<6OxڭCvCB^Viɝ| ڭѩ}G6"O%Q)6vT.U2mfZ[-%S.g$(M7Z)n}H уe`j+r}߱l*,=wg75?s̓\cGzO۫ͫbEuFݼbr.Gk+' 733RyʒK(siJNDt<KBƜ}.+,Z9= Ù*--=4 xQ lY4FURH,(~)8A;JMDo8 '^`$EǙ  ӹ?m"n[Wc NuVUC:T'{u4Z^7Xsc uv.Z.?m7ukn; Iȓ\_*"k$epcmuLsZ$߬sP|| ?Ca:X;!o\\m5;zA Aߞه(`IJ7+\}־YP4{: # |l5'k' [:/.fٕ}xw *ʖ,-gOz&ø+IC!"F$oA,D m]RIerV +qBsURˊxC V &=աe꧳qݩJnB'@ѤOI D} zo vۯ T@w6-ST!Hjۥdܳqh%BS=[㙈@`42t4YqEBe@@:Hg6cц(KZ'"%(L*=PJjq'cAz!;Ew>ǻF O(\GِӦq\cP>?Sn*vBrUxN v]8"ՔŖ#j=ûsPM>+2fR*%\)0Me]H'K[gTR2`MB$EP" jdQM!%N9S"V`P! IgPV>( B޳{0ALjTƭNTۘ 诣%*q&RAEJ'\{}]jQ!8-K)>$|FˋBSPȅDq,\׀8{﫯 F$ѣt|w0h`-%j%=ի#ipsl@)CB y"w&O[M8+JBM$j vc![ٳiVlx'*_nօ0-Geߴ|,qD٥P kWC[s&$Á:K[@m1H;m,<LmnQ:BCNFA51D(51ֱBeὖ+T@^dR`WA c n#HӲ*[v?,@CSq8@很CoP`$J 0Kؔé:Sݢ^;Qiu;9*;>+nl~2;f`*~Pq4~0lֺQza"SK/0=*g{ĹԭAD8~u2~,l4LEZM"fАڀéBzl{๿5s'5 ;r1w9.tnrOySEj zo|J P `wn"]*:Hnh^]-y"!`Am0l.,!s}yA#|{hl+}EA|/-o]o{1IhG^s!S\S5`V}6_FSVt(3VsiZz~E6&kн:e|Kw\Jܳo7)=w{R+sm6=y*齷yg`r*{&+Ujeѻ~֩Fc_jJULq ҆;͍r%u:ICq\$$[y.MSdh42)aDSi+xN":Y| /hn(Qzrp.;3JtCI}-lA!LjrXQ'%0A[Ǭ$q"V@lewT5(+PՄUAzi[!<"jR R "6NN4&{=ZmrD,{qo2o߮jr W5X?I,S{̢K諪A9GAs*LY.U؄Ypb8=* :}eT{ZFu`^Rr"d()&̭Ȣw:X'dRIBTX[]:$ SƜɡpQ@KN%y%ZV6 RfUiv`jKjc(u \Ȏ}rSS_i)ʅ)7[rO}qmU&AM`N3U&0!NZ]9Y)#Ŵj+քQ[ߊPKf`9;ݑ*PN/ZO. -a%\DF99O${q^Ww]"c#3fj-+P'_ĚOj:qT-# ~VcY"Z&-BtVᬠer, Fou^l; ˈ><+d*q/d6TI%GV^S*lB`g{UlugKh sZ:kR5;m:?q?[Ϡ-`z)R<}>Q(pM[|*I,8v~JUytX{Juĩl r)&*с{HY4D! ) R(u:X]pĴrUW/qDUHd"#'Dc &h{]2ogy;log@[u5B[ 9#tIrBs9ƉʠEV\;%(q8&(־*жO 1E!Qy%.oB4h Ho&\G)P$Ϭ'e#K5VyejԮTLhc\hEcUj*+a/#6pKAt2ƒU8iEI(I/!IsM@%FP,(F6d%>QpБM)#pw/f/rTryє<[8$J w]6z\k7V ɻkyz֧]D-C-{/xC.M ?=zh+ F:o  7ofXS}oPх00.Wm!83fk< Zۿ]_XybǓhpM)1BT=}4@cZ -QL|"D hIPI(4 gRHXtITӔECD%7p,O>wY axՠH j#u†`t Y{H [a)(RArNB6;j ]]G8TC !hw^ntcI!/ ,0L_oxaqFy/bRHJH8۷px349=_UWWW(,f(qW@+SgoxlB!Un%0adox*TÎT*O6/:Qei  :+;MնBwh-x^9)S0^1=LB5/,Ƹg ڛ/@UdVt}x_DZtSe K)}?g05WjT$0-8g HcGo*n\yڬrs==:wQmoʔk{VknӸm_}[OH("y&FvyXQfO9("@7d=8 Ygm;Za8q^,|`0mE ﯍= v 00L$'$-qwRXE$$hCI͙vW7eNuݑu RxkekjD;Pjܲp]x7 YZiFZ_>?k4 1!GS U>4iҸt@+&}Y:K?Y˼Pu3Ag>6>%Y9)&0_.{ńt]D}CA#28wTxieԋ $k;;X9)sxsUn.hO+6-gj\|ZMh[ *SR"a 9*j wG:, UEaB 4rYөҌ*Yp>WԧkKum~;Ξ@1>-@{иEAOѦG:óeoLJ\>ğRIIA*gAN?1ͼJ*%'ox;`v u`[ ̽p=Nס6DB.o@- ,™dB3MK7\,;>q)%_wd`@X@эBk~rLaB`' 'ݴta ׅ-g.l\h dI-q0gdm!RK]-ӔK(VGqc '>U ur+wPf%SdMŔ`Z֒ki#NRL*kH8-3\DT@{{DK6a{ j UucJNTʤ"@K\Ԅ)y:-=Z2*Z5J WF(*A99D:% QaQTYq=MfTa&ҬƿF5D.p H *jGfҸC5 -^{kvONhCgӨRxw%je5rH2W{#dC`0=npHn`&L'9T&سo`R#˶SFFtoVTxY30̔z dKӛ4\ VZ76 Kٟ_Ó$C?º^}5lz x ,cUM&G7[S[B7캺{㏵{k"uM)oݡ2rLt|QfҨu]}\N'JGZ:)(ӻG[psC*_W.)P>9#0ɯZ'Brzp?ߌT}D4`ncnTKB A"ҋ!9뚫4^RTex MU)TJU_o>$z`G>ҏQAZޭSC.v,8թiD BEUE8*KB֣-$U;^<{l8s޴Ӂ<0!1u:KC11hk5"kGW%uV3(:uE1fklUueRz<,wL9#c:r>T߾T= 0C^}یPYa`U[]dֺ@%ۯrF(nsϪTY+  &vsF(0ynDnVe>ﯢE}~~YOO[K?,na˻_ۛ &t;_eaPc ιb#\l<9\-X\h`` >\Nc{p†IW.(=.G\0zМ˹b꜕켝SwޚNh ـ~g(|l!zG7"2r_ε[S\?5zCTtK~^'ylϵ, ?7 7oE-JzN[e[75Սoߘ>D*NdaզK\R k  -8=D!S*TI9JSz 0;Fka˺rvVs7 [+q) ',p7Opg$w:N[r3țgDzny9C ,KfkO?7B;>7Bzn2DEPWN,-V~tx^ZOYY$SIqwӫ_lfN9<^.'%d0(B6HH%^M?=ު=~ PHo)؁4 $@M*k4 S*oFS kIWdՒ1,Dd~8~[a9hM߿\.N˟ty<]UF;zKD2]!uD10#Jpg!P5w>fwV^O]}Zosv/;_fkGKԴE/h7x~6' `X_z\j|ovk?(2k>ͣF8ΡZ)QӷATZmOװ|o AIÙRI&6T[yJ62JyWR`&RZr#NT+Lc/;2dѥQ{k_ikE,*)*ā9#edvfƔ.n[W jEEj \(1BQzy'}޶{O>Tzb%Iؘ"F֙((94#\rR$1n[9t0gqGy(bVڡƽ2X2K9C;A yCM[g"'Zqz]ƳbD04 hvy!j*sPF{%@QeKS5k96oT6^Y*yIûk$7ȭ >T#9Uey:>Sx)P$r, ŽHtu'7,LRc8ȁjo5G"U VzP捩,h[.LTzyLP3C&FAU/Q/F/)^xi-3{|Zg;l}P-M0K oڎ![7=P:!Ŀp9S5߶Jb%^XLVB~jݢ'2gmGV|YVdeXLh1L;tg Vgu[9L:,p76Ti6:K%WpFGקC>̙fRpVRsU9*ܔ'2 tN z;b> U<.FQYVgez.!Β<}Mܕ "9JwW/rr1>%7Q<}ԣh(NM~,#ppLBt>=B7ZQ~f딵0f-t`YwcSN|Ԝih=Ю 7_npJ8?`[y@ӣ:zja'w=ҧ:$cl`e[LGhIG$ea3BM-(8IQ k%PRRhGzKő(}:6 lBJ$>@iĸ4A#E@zL^kRAXn}FfW w@ Z&2#X;5Tj"~? 
w S=ڛk<46޵'~}> I#+v7jO {,О1w4K\X#16ܮx?/[4NBV(^G^OxiOA6v*KpeȃR7BNz)tQoExlrg d Mr_. =f'Zt ٍ%^'HZO}K/U^JZ~BC7*L{+=Op4W0ϗE6bb7|BK t.~3ևdIA o\bm8̇\O+G]+k*0*9Y/7r UcЗ6™r(dy8h 1?"%-︰{nzZ} Vo@+.&)j|YoDe}܋U>ļU|t5gYlOTQeq\ en~\G(ۺˌdt"A6/*!3< *w&9rSBV:T~^ ¨HN|\} + `)ۊ-Yt&zFn&„3JjM8UDux- QHt#3L6P*hm.NCi_XEKW⚍4ED+i=GsFh  0ku/~ԍW[] a:3&̎f+}PFVH[J@#U4Fh @ڄ)4&iV0J,?$9/ tߪڀ_1 v#ȒL&)"s#Eyu|XUuuUwWf+Y!PD=;FK-5IjUUcRAbN.^lFxu&?yr6<!:VY[_Xq(!wXh>&+/HB2y`Hby"@ƍ]4!6Ll݌y+Y(HrjQY2 $]XvŁvʨXAIjDX1*l!Q 00XA[dȁg IhK)yiqy8ĞY&uv$IG8*КHa`1*#-H*c?c Z%%ÛCuvhu%D%!6KL)dYPl+ xcc<q<3 *Xt*HmDi:$`ȷPS@kaG?pӾ!鿑SFO;ݎ^O^t0<0l` ljm,u *~.֗6vVipAkC[эe|t ݬG>^֜yɼ|R:|3O]\{9>K\/ge1xr~3w3Fv]sT<_|^~^)r}T8|GeWˏoq.s/Z:NOܔIIm&BϏeN[ohs=o)jc5bR\Kc}o^ |5̛cfC- M!{KL1H=cRCp%%)֟"Ncx#ٓ|_O9 jY#߼[s^ٵQ|vnѾc$g%%=vjmG.AQD0_~<"'Y'aj5ƔS +^1My<eGp8EfEg_÷ h08!:Y .턮R<:*}-)UCZ=p"U,wP/f&f 4F,ze%䣄Qe%`"c[$r^"A bO[7|ci7UĊϛdGW_)nFV aSxmlWA-w ]R'i8HGF@r0+iNN)L(s/ZM^kຉL֝\'9䁗$ i=W';t\uL#(M|8LhOSv` f߻J}(/Ρ[)ͷ5ɐRRߍnBˤ!u{F;Z=_Yzu ;Tϴ=yq<{anGIJJ://VFy@׼1~w0Yrk8iyTKdNm[ j͕[sf4(ipA4RlcIHsl5ϯY IPrBuU kZ{#>9DJ$H)^egr%(˿L:(H% k0'h6ir<$ n;44W-d,@;jIuklOH;k2`Z4,}V)k5DӰ6 wj6STlu>7El}K@tDK%R}n{O%ks/]`bTIQYJkĈhPoŊ=/E_j2;N6\$ S*٩Ȗ`S:.ld]I{v‚鉵eK p6If YbȀt2xȅC`߫:7B%T}N.4URV6(o `?BqFD Rz"i)z^ՓQ)Ȃ ,dls"PJxM4s%[!RlO21*,9RTJ(`9b8Y<9 /)irфFqylhvlq=/c/tv{XqOz5FQЀ!)M/$?_/8 o(Kp<h~rSVIn;F&@Q} gY38\.ɨW)>y \1r+,;뱿o\VsUpgX[u Nݲ2k}-./Ϝ[ -ֺ?l{+Ć..h*IɆ.uDmg a;_gȎ;퍵Brh ;@nqu6Ź$rv5F $ rςsϹ[ѓ5X$"֩D |NAtFVJT,>g "xǹnIE/;px7q|eC7 XƗg 6?~%?WCDv΍p97F/xvmf\IIiI2q$iNHxJj7lqmM@`(9tOfP B%~mްkr40{ƻ]= ;'vAZԮuL:KR[M&j.X\Bh%A76oaXeg#U7IowY."3'Ke%f v0JO\0דzw.3-_+>ȹ;d8EȁcKL$&6PZUX)$جH tϾaᜥk9fEY2B*^xE a7UP@>dT oe>W WAQ)c_nb Ta [{sSuZ\Bt`{!t,1؛,{]x c`2hĞ\R#{)캋$i\$}A9*mY윗j6~=<2"·:rxTz6d|[s6q ޶t>v|w5 7w_Nj8ůgq<օ+L_o*֢s15}_e$s[]fY3`e,o繿|ϥ{^k)>{ Z3iC;7/`|siR^~{WG{[QmTêKmklt۫ v{?lJ5:lYrS< +}NR[ݸ{uhfZYJAIXsRؔ"곕n+mKq,ny 'ekAK؜g\BwЈ5xz y!XθMȡ"e@7҆UmIb Xh Z,dm@ mEJ2Sg|2ZLbwUvX5Ye =>lݾ`}!|B>,@£9X*Z  EA$_YgLy׭~K.B[?#Cň?]assY6-S0K a"(QJ'܊J&ҡeXPdhQ)x"g\F7H~1Mc7-!4Sk~.#hԙ#z=Zv֗(4]F.@HO.,^Ƭ0EUȰմPDN U|㜦dюd) fhD̜;idž]iJX ذWzDEH}8$:aݭtZ+cf -Mqt%m8G;*R.rglW18-4{|Ȳ+N/gT0c[ja浞,j)&M- i#GY]TgwoR=qEap节]]O8_E*m+= (ƣKY<[K&Kmh{>?7 κg]!4qT)U8okG.y7lD {‹5ptj$.G#VG#H: 0zO_c nk _? vk@n;/t|078xƆ'SݎzP>Brz~$?^;6Ǖbllok8n~ǫ?ϮW?]/_3?9D[V9uhmwjnGv1*jSӘ+N]n>wt-n pV> /]M3+PʫrX*5מ?/K0`o~~@^4i ;!)&eqnԓmNȻdWzIF+*:lc(j|m#b}t!3d^,pbbE QT֩]i,/$y3)H:A&ypșyFMi+pAFSd48``lz e ʚ4x<؟̓yL,*#`@Ϝ[ WxR"ԮF^#iAߡ5}k畴Hf ܤZΠsuIVWD++Ci wm͖?i>s7 6wwG9S;ߑ7߾·m?o3IΥ1Iw.(tGM!LPKYhn|6Ni~߻ZQwhRZ&ڿ!ieۢ7N_96VQB4*Ʉx] x[*+?+Wt|U~g˚QzIRj#q*.k jj4Qz C ϳ.N+F&$Od_e;!dT'`i1@ 4ePsqy^IςV9}qd4q)vAr[bLlFb&V<(*T&[GƑDhk]qUkPFeƨ|#7X[bL,3,˩ f|SnF:Q:V+cjTaUUevU[TMxbVBo;Rw'' >fPx23S 34yYrؽ%1NpsfH%BCDδ(@53Kd"hN2)OؓTRAYVB,+9¨KdhJAVbJvlVJ@U,RK-+ǃi6NQz%QmejjCd Ȧ 1%Mkjmj(4l.E5HWri%UkU6dTDo= 4&|T']Xd^ T'.Em"`CT_nRRsgWV9^~s9Dj+Ow"GNw?TYءN+\*9kbga[%b1dMR\'0a2:s77 XNLⷛ?e@ڊ\8#5)S!-%X,hŞwlLaw @KRgʏ(rѐ*Iّ;YX!JRLi*7!vanV,HWjXeٿeZ K{#:q\^n@ _aLc)t0pֈ41cmhI ִ98S1إJI!(Uck7 n'kpSCuc!){rPQ_$ԧ'-&,PJ}DaIa)TFaQh@lƢ+7MiAh&%Vi4+`wIPQ(\ #(X/TP2!ʕi>Xa1P%DD61ZDTs:[6tuc~hkO\ێ>W޿?ͧWVqDVEX@Ц0ƁF+er&Yopu6!p!|}7_+׬Y ~oֱxH}6H߿z|*bzG~oooކ ?6k\I8+ 7"7Ct⛱kao(/u]bLnH ̣ittV6J悤!q5\7v I!Iu~\֐ R6<8`6SKQk&4a!NaBnB]`eL.^RZ="Y[ ,qŻ7 h03cbY:Ņܤ:q DžA:J{JʦqBX3R+"5LBu$R84~n(h:=%9;;Y"7͎ʻf/?alX%5%!3ͳLDKr2RyKvy`  Ϙ?%amS I #S[qB0% cɵQ0LFJ(@9g aY %u9R[GK(& XfЊY+c6й;c+tq=8n9PY~)z@1 Ig9OԠak bf}$ܟ% [b?{\ٛQ,ޅ\nR+=m!8ߵEϗ©6y{?ocb?]_Z߹pUqI\j >W@u/Cu|x}o~1Bi/Zw bv>UcyoSanqfQ:S6pbSY.~nfm RLg;r|qWЩ&5w^ưnCl:0K8QB. 
A }GwaTmx[wB^>Qs+%XAtR06U H5R`DuÄ@X<@J08|| |\mK^,KU5#CWF< l^YSwgE')gG)%l]A=.٫eꆅ!"\9̎2sBI~k#EfM8:O'&V/C*C?6%CNϋ|Ɍ;#{1e+yeջ@/mPMu_~674D|MM4n]X_IP,!Qλڣ#Ml]C M^#r]\Q0y53E/0Dky!CU[Y H,@b6s/;1lE a(+um)!QlTU-dǍ߿'T8w$=Ձ7X ap3')-ϡkqK+#bh9{_VJNa 书Fk\Rdom]́՘#+ <8XfX&bijVpcҕ`@uXm5Uр-a! HFPpjb,AM{@x %B@N ^CZr*o]`B sk8R4VA@|Am"û$exc⵻lkzBVk8ghH> |)QuMIwRrS,҉aOZCSfzいK˶3a24#p`d$~^ aYI$_ iJVZtC{WyFyuF^=&C==}d(, vontЏyKM۩><8oMi'?=w18oFmGD@ڬB)&⑵T%;Li@ޑ%;N, dKd[ 7A62&Pwx umD6pbS2iYt݆ t#ǻMmK4"eoHncX 76颯TA/*kq$k_$k'y M#=՛X)ȼ\xT_nR}.X>eԔfHZJҖj2(}V*)J%+=DFÝ563]!J?P(}VT6VsyU^oƃ)XfQY7u=\CD LzЛW!Bڇ[u_/? !/9y 2v#c' S+s+0Rh3<όW~wWaYQr7M]JkQ(Fѓ{G-Edkri`KT}CFx/эekl!GY# W"]Zt:Hp~ EB_S jl 9VA&>9 /DӒ+<62.k"CJE%Y#Bi|, ۨ6<>eץ vֻaE@h*Qj:v~a8gI][o#r+_G2,SvrpMX % x؎$bNbK[d^WFߴ'˷m WGk>9(!nFv g#ė3lcI9Z=Srfϔo7>Hɹ9z!g$ t2`T o܆_ [vk2c%?r Mҿn=IF)ŽhvХfo.*#58íXzC"U$p;?,VdP iR=5]4X>,wHZUw4-DwV} r?.^\tof*VOԟ^H (sflͷtJzkν`4&6!PnJQ*_&yk,HM0zCe&)6gw>Dy%1OąlViRKNJ߷L+8hF>J:2+AEᔯV n,D_3tDMo!5u,b ZOePq`E :'PTe+Af½NA(rԖ+%[/Dŭ`\Т tpjSg/PKJD2pj֕+m[ˉ4E֮j:#G7Gi41ÝC׫7~e]6v룜wROQO 2Uߖ*4oquZՑNߌEx:Muܥ1{>6yOd3Jy`]WZ'R}<&| ;OXN AeץcEIi:Vyl2)ue΋#s⯵WJ~0FJѭI) ΟF~$?j_k\ka#3;bw;bL1u"}ZtvM0I%ZpݒA S9ɽT>T.Ȕ4@Ѡ A*@8T~'gf:7^ǁURJ LZy. 2*YM+FwHEeql6jAV@/>D,S^*o!Hة0Sa\Q~,?$5# ey%T y#qK5cjr4u!i.Rd#{6L/Y3٬Hp6 ^db|g/7")Ci*R3sIK|]{' j%vvf9%>Gs]>Ӻo hTx6sZsubOn~6-gy:_P33CyM=NrK:;Dg5@_zuEׇq;SEȎ. SPB3 \!juGkAq+blFbDpq,!X4A \{+4v # sU,Gj7eƙfu3_Z^/3fi׫tyLT)#u9L?V7TCwlpL^EyE\AN2FȍqFQqv:B5f {B?孺66R@eg٭涩Oxv/;ķTg1ЕUslc$뼪RcH5c:1M>␙)HWn$xdlNvrC4J9WR bLVDEE_U\%݋"iJE)qQPsjgZEEj^2qQʅQ\T H<08PqQmTJiyz*4DpQ=I ヌQ2#R癡L0״Loqu?O,ڭ&Ap%ob]PB;N@\q |{joEW'arA&pNl *@=4)f~%%"b8z -{6ӱgVNdg'gU!9{rm@*v7'g]T /(C 3n! 0:7|qЬA^߆&Ǧ^{r NڔhtSHXdo!(cWZ9y 1&)K%}Sƶ{3-B =B*zT , $JZ*BꃦV3 (Z#kBrPLp>cс xf*O6 0tf`W·J*Q83$=8!bk0wfBHd,%:"*IDfpƨw)',%xRI*D  Оn pwƦrlOl, iwh7bue䔉;xb&_>/+q̈;")*qRpd5j0j;FgֹJz qqm3_ʞ_#fq. NHn.n/5kKwf|ϫ_ZcJ凵wKmU{WCh&ԩi0Wk6g6E\=9ݧJۯֳ)Sҗjr}w֑MM5j}u~p.bc:mQN\ ez.,nlJѯ8_z7 0iv]@(o>=ӻua!pݱ)iS)U'ϩdN<6Yk{:74HFAQv]\gjtuyn3hKé)4sWE4]E\zD^+ l[]Z>YٷAi,=,ausgQ\~]SS?pLfv~-j~ \a\mjA֒5~]WtS#Tm4ͬz{|^5s9ju!2u!.(4mE 3({6 ^fBD.dos\TlMw* ۾KQg_: wbZu "wGYij2`i2i-})Mn>u]5"?ƻJQQ"Np~@?{WFr /,B̀W,bFϛ,mD`uiq8"nΫHTS]R]Os>'U]p8kmw+ P d=pTmByt't4JXE V^5򯞼i])XnVes#OaC, (!AiC0 UTPTvJhAm֬MuW{lm(Ixi6;rw-|E]sy*=4YsD~W{"ya矿4Wuamfwn=jd\#:W1IApmHMv*w6F+'EBqa75ORÚYB=o%"V=TVznK ?V 7~GA{VDkMU @g4a=f#Dm8Ij{Nq) J㨏t`>j<#:In.A5_ߐ'% G7H. AZK+0(c+4 ߰5i՘4EVqC@(+H! 
&m˜&0$q&5z V2P_0_p x]Qо2el+Kn\;LX[u/2t}MVftB'Åf h+S6=RMj>UoM%;5YYY̲2X,GiFH3xĉV3l:gPW伻 y[lf0}Zj$Q& ˙"ՅQHI5PJZL HZqGq\=< ˽/7˥_hYK4Lpu{D=:^ X8WXh#rc~oHE[Wera;itQ'\[(ʦ!r(f[};/c}{ؑG<o!~ǟ.p:pXJQRj6Õ&Pd>2K&n:43ysu"+­52Әx!s& [%,S:қ \ߦ.g؈a^GI}UZJ;K_}Ux-(9R+0yR8 @`(TN^8B?g|cK}UZr)'/}^ ^ ,(3GbM jm_bnx\ړʅم(ԥvM)VS+={Wsvk㳹aZΉ}aa=,P⾍9D b~RD{‰MH$Uð;8D@l%W 1[q8M1Y5(dl5h{*)w_c(2ZEZb7]-Ru!}/71-YEmnE u>F^WM Ƭ7ZG5bMNbu9ޞ9kiR\ J!)YebhfXu7lV֥?\7if27vOǺߗGCwF壣- !\M=IFZZT-Y4˓AiN%ݏ)wrmQYe drofE" XdEuE}HnFQ:0]\q3qR0/ETjc+bE$#;NG1`*IdF&;qr/~ͻCD+¡Rn漚 bL-~\,M9 YJ4HԲr)B$~j !0)=▋JfAG \]3G8HI):V - eedlj@(jG%(,9z?g!WU)W2d@(ĬȌJ'ᆣr`煴9͢l+'/ߖ"P$ʢ*1DMMMZ4Js BƳB9;ZpnbKRU:XP+uX0A͆RJ(t m^V`\ZH@!K3ŧ-䥆^[8I@@ "z?Hiރ/u(Z<'|ߧ} .I}UZr>5yٕ.X@P9bVO)뻥%nf룺^~MU}%F(q$HVy\ !;ެoթQ)_A+hd d<(ɋ<{m yw}Ӈ9XP/^нkDGJǬ_Vۏ~fquc_DFHs3)bXg)5+^)^mYvA*{7iipL-hdz;"OL嬱 _I[N+ܤh kt i,Y&sk*iU€F `J"M@J$INn5o l{"H>I> (EL6cLQ`" )2̲4J%Cn+OFTuI)Gc( e`Q9sE PN@'9dhR j?ԢL@Lֶ<` a:Fcj i\ ox2q( X^sY-L*E5fTm7uGJHp2]0ڄ'ӫdz9A̵QRdtgy XpL&K$q%s}JiEnd=JzM֧eF^t@ydsR_եv)"p(ʭ`oM?^?^lEYz{:l}t PpZ[x `“jf/(@s7aP=j^utv"wk- P0/絛v%7L]'Vngn_3~Rە }fu^I0߾l(c[Y=Zp]ӨhТ;32a'MLkXw%\FhfdZH}:@>e&WuMxnH8ʺ{b˳6Smy>Ƴs̊&WA0MD|0QQ<,FKD5(3W1bYw?"QKg*2N>mޗA]jKgQ$de_s:X:- +H1 l(Jf4!| G6J)>efϖy"3n||phy# "v_gxy=&},z"J-˅T MNKZ̟m6K͏ߥ~Nogus$>, [?]'1Z7vo˳|U-VSmNӾ e}N6I 7kzvc@I!smSƒ<֙JnMm:M\}9L-50߹6)bl+M >!1m]dޢ[ֆ]Sްs `Hw6һӻen.ц4JKiV>LݙΪvXPJٜ=;/$r lp7PӻgqS#O=wvfla{Փ +`PeçӆH=f;ٻ}ju>k @h j6B9((QE1Z%zQ)$o:TӣNզ PA8ݢ :R鱏&5VqhxC79= TAGDR,݆rD6)vl儻{lw_}\M6t"sK컥v?o&˕BغU[Dd%~{"ya矿4>{[ .jQ%ۦ7Zg_>&%MvNFs J!u[JfGcl%]mH{l${܁kEV;7 ^'$po4ȚA#hlĤKݴh 5U= i:KBq)޵M"T=Pb{@4Ѻ(D9(b#XrE+8 3rIQΓJ*kfn"̘0 ue(ɦ:/2t?ĒoBd;YQk쨝=hӀnꃥmpu`WM-]4z5G0Bg>a <>#Iqg (pٞt2lIg%y8+/``U#8.r*?|8^Z{H/}N꫺nN^TRᢒsGf_&|BZxgv]a,\8JR ^n7> Mw֭J/s:%ʅR ͉9729#<`ö^|Ez!1z6{,GܼXp}Z|8 7^6Pr0ӐC$4>GE+Y4.n?ѸVY'/t?Yյ6:Ns@ BJ>v`]EI1F;\X޵Fr1_w(Q͗[WXb]?mWmT`7gj(")?h-KY6m%wѷ2+KBVXVLs2P*%)Apl8h G^[s!{ Sh9<>2EEҎW(4fLi yɪ,WeI*1Dh^.+R/i& ťB3޿k)MIҽܢCHe ikl^ϥ60Ϝ΀3.cni/aK^ [P)M;8x%L|㍈*1; 8X5؁2s-+Nv)6~+=q ]Hed@wh1,И:6YE,b@cs1+:[U9(ms͵erUQ>'ȓ@[Ke6 ނ9S=_I@Iebp+m+1& E2QiW%̙+@KV=,@6V$mi@šoS`svle6AԺ:@J%ųhL)B#XE5 V$װ,%< %Bg R&gM~xxb`-@zH+eOVJ}R{A/#SY"o#P+I+`8 )ڞ0FR@=4l Pr6bA{2D2#lgi_ n QG _!3ϬMM!IJ ,!wO3- OǂNO Ha[em'd0]^zQ V pz2ƷqLaB1!4<" L owr|G'M oo'^-sAN _8ZMN6m'Q'&QGJT$3Z3$hFcFBrGڊW|f!yo̡ڱԍk} N^]D Tx 35 ]P6>zv᭥7(kP{JkP#ߜ$DD`B˅25i6 s^5=|K3-kߺ 0,J*zsLC+ ggˍ Aq<MzNŷJazmgG>8A6<` V #Μeɓ G< yx_/%OW?V=f\tk^==wU.:\RZǠ92ľ-,Vwo?7>&`xMc&]{Shem`}8HÇvŋdCg`=C#@rO7Wj&ՊqOb'Putc$#Z=b'}Epkɽ[Sm 9}F56ԑ4/>Yu{}9[_<`5&q}>zm}CXw9__bT߻훋f@#OW'uyW@wɸ[}: ??)&9=Jk׃/]q׾?=ƣ\.\]t!_j`P꣟[7'ay:߈n]>uKnj:А\Etm6`̺)XTN7bۘ[Snn[N@R }'*:}(מ͗}h_Drݸn_gן~ Aoo/^.^/¨IygT}ls-~hSUi`wHz-C5\at>h"S[ܴpkn@zb'saUa [q#8~m\炛+Iހ_;xf:Mk'y7Kw~^LcXBcYRgÆvlˀ#첼|j:?|Dwwwe]'3v1+:疏9I"i\K!kGIC$]hTv!J4I *Ilcl<]9h*tidDT̟$pY4/cň1Cn!ȿ)rdS/{B!ltL_D.r)MX-R]rI(G_uGث/>}nVיG̟]Y*wW&ѫiC狀K+׾x^<oXl-ͅ*箸=QJOܕD79Ӱ 6ي7G1Nvq)b8sTr|zd3Ya61$#51`޲#WO.dtL( I&qI/qTV;M=~Wk$DŽ Wdx҆A%sӏV:1)a##ن/ $H f8AuW ߪ$1$c;nG7+-xT?GO Li=Gh{!bnvbs#CGR3$c4ѣ,%c$5vvVu{COsq').@u,J݈A$-)h(L@%xx4m$RkP#M4Ey)\0Z(3 flQϰ2I{SˊQ»᜸eHS:=^N jXӊMo;&֣rLd\TV$at^-J!3 ,2 J1*dH]qښv(8Dv? {7LK.ߔ*et茮dF X!L32٘!YQ$}B)h`=%W]忝u9źguJOϊՉ$k|o }q"pßz?w+>Rt痾? 
=@}*?Uw˵`~` FO{vdBHK*Hw wv ~}ZeP>rx7^+z_؎y2U`A~HK_|7[_~{wTF )ŭuܣ%OVR{tvFG*hI%OVQ~CR RT GIfSϓ#!‚f#ĭ*ܲzJSw3#RV>Kx|.=fzP7KS;eQ6^ߞua.]ph]c$jwF+ӳWOO .O$5S9o8R2G5hZAT\h$sLz-hܱ,]u#9x{fAg|qMba̼rݰfc^ݳ:TVZ$yuIvx8aAV: +/S}M=,,qy- 'aET+Rjim^]^nVL3u#9ZM&_DW:MtSGF_KA@QҫY,͛QE,xt^WF-S$h$f.su6@P!fm')D#JNEڣOZFz_SA-HAVaP|IlhD hFQM .#7`8ڽ 0]W2J0x7+rȖ5j~]Ε;"g+DںJ__bmP#Mbwy<+eE\9g^cϼƞy=kj s[z°UNc '9 +]ʩU\uX`O~=jYOf{3?|snNfVC%[ba= }ܞ{^$|K?tŻ_YfJ- Ar{@p$MqS9REaqyVʔrq,\W5 sAbyMY#nL4ej&/S$ |$&ze$SoÃ]0 x&6B!7\ }RR)^ T(k[ӌ-~Ԓ;_ `Ô,}@O%OFẖ?Vb ƓM3#'2oE:R0-Eߩa>gL-ER\u0樥'}T d~P~$ ,3pMTCׇ <ĠYqET+ 2`hraIl 5 _Y9&\Md!`5|8A"Wdt܏(z(DxʮhN!#%@NJ-z1 '.{1hꄜx!W 2;9FWwAjw|iog+5KgzAVj.N+Gs`¾L6պ0=,-HIhi|܂ 66"Іo RȘUL1@2c:rU`2)5Vh역*[t/D0 qd\ ״|eOia:G%WG]I;xTWfw.R=xj%,0F+Ks]HV4EgdMx(b9V <ha^:eUdsD5W0q3S5(g >"Wy oyArΧ HrAm&-\֘;0qu&5lP"#8/H#㭏;iS$V6nɦбฃ^zuU*<At#-G&o~:6_Hxkpv=<^J0x|mX+}J)sa@d\+x7l!$3Ȕ?jI]/E'}A:s(\Ж,H}{r,xVm[F, dZ6g&PH<,ZdvĊ?%|nrS9QŶ)kkjdGq#D鞩y9/afá+ml64ݿ~SeK* .eJy2%˽] tOk^tSNwNk1HnUod, MLJF ܚ ,!c9Qo=,_n_}C6dN|Qy=KYߒ|ɾ^^#shwwXִc\sOl6,9V'N5—т)D9kJ#[[<2Kޮ(0zL7ޜʰ#@L#EP㱂D; zy)9mN`/ jBo@gыp X8l6x0~f˿;*sgc X3Dl¶:~ yrk{q]޲n2fm,nl[>|寻qnߟshޱ"ݝN|9'@9lmzG"oч.M{(FcjakE~HAx#^sfa4 .l.$Qnbt!Zi3.sKDWd@yQH1 dK0HSE & H[yV jq>Hxn'jlz!͵/Y AZu"EOJ˒JZ @IG1ux{nUg5w-`O!*E`U~$I{=qu|.=EFm>WQ~.>j/H=-LjWi?BfǰC.xsxv2 9Y`0I؉¾=̓{Q8Gnbّ#%F]=wOv"}0鍮8cb{wx?,?5*9r-\5WWF!A؎ / FKaqN\t, p]frd:+\Ȭ5"}&f|իJK[Pn9$/9a3fI,>4?jZw24ޜ}ho` pp]$2y;GQ%q_~[{!{Eg>wx0c0NC \=?5d7 Frʽ_ϋqw PHY1B ol1Agq ֶd^% 1i}ӛ5Fv[7=ccOG /8b_qcG^XCG7~`7?$SNwnQwRpbSN%KR녚 9$%ch&t0Q[w-*xG2*\fR7c5/>9A)O~tus?Wm :Όp0` DeZBK, Hj\ne]dy>'wx}VR0rdm2N.ym$LZm"|^'y[jZʩ!˅W͕"4_h=Ʉ y\Ы1g7_ofd)}>/¿N/OM#$q6ُ:t>i EH[n5I2lZXNdoX^U;-9]~[E{ SSÚ'{^ <'k3`͛y 'vtJ`)s/x.imoSp3V )CF`fZ02RJ&!q2k:C_9~- }?y_AE{7X ۴&@&3UnXno1# $cQtfaB;ku޿=GɣQeh Dpp645MI4_ϑ31U,3R, A%,sBCpgW#A_P}}MW] m lrW +a!mCNʨXOVhY koZ08|A}p>*} mْ1B@C.Gt]_*`_bOp%F,ђV)dAŠ;dlZDf~ڝ,sBXnR@8܍Y ֤ a;5q`@rZۈNaڒ1BCrs *lqƍ$'x0ƃBh^PoVdkFv :Z6bǁ  Fl=E5.^_ZCy L+,D(LrzoPfn{SJaݾb-W-R*!HMLGѦ?+yZˎa{iТk;R@\ >"&WW#V#M966>Phޅߵۑ6,S*$%X&#xϝuFBq"MW~oVt0]A@Vl}SE2ed8pѬu#W73Ri]KzDL;Q@0`&=8͎\7]M Yq> \IH[T+n*S*<:2)aK6XmbcۢRx_CV^!Q,̐yB^!ѳ]e6P9 Y%rOb$BL!$c\j*DIeb# 48}M^V Jmjdd-Z.+'u_[',cX|W~vS`qzMo*Com;<\ɁEdzWV4Zh]iV@ZU60g*\ 56?;a왐R:~$x~Hp/tSmaxI^R3#ANUuKT%^bll7Znh?abm",:J stt?g$ Ps&Q 9!)c NNgS̋aN^.UArRdvEa9G27cd9d`z>^|BzvAZ82ϟ^tǡ{-O]LލJ.xMFA /RXUk?RxlHa{5ӛy55%1dHmLf]0Eαy}o`9,X'Q:|DJD/ sg2ڴeŢECHFѠ{a- qgܓ2XEc٭Ymxnȧ09Ku!'(hٲ/ !%3ڟ> ~y'rH DvJ=GYK$!ӽqiC2ucm"{醎yf$܄FPel>DC3ˁvOЖ&P;o%A͂`%2e=s餷hy mСV:v;~7:ǹcPQ /B[gٻ޶#W^!AM2de'8D"e:#Cs^UFZ&fiPLHK_l') ڂ3ڔ.G\aB%$*9LjyE+v\~}<{V+;杗 A;0ɶ`/33xʼwbȀB>yxθtx7hÅ$`QaиpuPrZΣmW !'i}! ȳeX17UԦ^[࢞kK菹38B5>g~wK#+{*њ ӥ-jsr;b6e;TO(0bYaw' Q^6丗,T)|h\rtFA2²e(̐5p<]5ٜTV M6qV.矷%^cQŽpb# [ шJ vjtb0.d:gQ.)jk b[Z /8 1nn.o ʴ}dc$ȨP4Ż\~xs'`󲸼`#vŧ·BUaGՔ S5kHn8i'g i`ffSkQ,&6/'Fg%Фh&iTqg%B ӣvR[~CܴBqጪf%yhyR1i#\R\ wA|]qZ%Х.eҸ ?0q.b5EV24"Aǐypd[L-8m 4y2/\Sk*yɤiJ* NhUOu M쏼WޱS)W覥If(-M6zaFTD ń5pԬH(k)köۑX릏w9M/B*tAR !m(f39tҖRrt.'s^scu9~4sҕ]wS:ʕX~(1}t:f/G.J PhZ;XC@n=.xp̭Mfuȹ$A u5c\( ?yhvȑFOPӤW&S[d ßI;|Ogn>/Jܦ&Vr“f1TLI! 
aA>P~Д<])yI냓EBMWZ4 Ъa|j*sDL㇑,YdƦrC+)!Q6%hdrN/Md|=72dmras4 gl΍1BM)@͐iݤZ(=jUI6Ѻjlr>͚.AWTM<6% <Ё+A:sN'Ej4oPbNd8(dy-9`pr n}=7O{misݨ$4lN[X,V1Db7W=+e&&bq4h beP!3/0hgX ;mdL4zF,,9 fHUe0 hOT >p/I/O%@@o HF&ܚ} tC6킟M'=1ڡnMUT>p0e,\h CwZioՍy#~2f؇t5i +d,)_]efI瓰ro` - ?hYF0{3M0ه AB/m~ʱsl>:B_`ưì{t1ȡEYy$T1ی^DnG_ɻG?%mb("uQC`ߛ$7SD,do1¡rgkXTpR%.p(-6O^M>Mղ$'b PɟƼ}#7br]0+/bcمqͼ>s콲.E Z(?%okm9UhD)0ld jy-|nJetlfiPJk6U K%yPE nXp絑l+M9B PPybdVNGE-{[Om<6]͍BFTh:(ɌЧ:nzh0Э)$w9^iFQ4Hf,nQp̹R'cƗEwkblWsu5k##À QE2puȦf`@:u}3N]8 Q3HVc3:; PGMjovnQHmT0y7]A|2`$bO^+yT!It@0|H3t6Vm6X{" PhHF;`5ے~[EmSւFwnXpGMrU_tuz- U9e}btW'wDK1h}ek Iʪi2CFr(rɗo3iWGt}O JgCgPw种AZ1RZNmP# rq>z=~:F;~eRfD$PNiΪ N[9{sdc6Gfik 7wfhtYcd%mCmz4+!N3HlNU)PЕ7o?߾:O/g~ؾnzJD]+vxkV$3tI[So<Ҥ7%]m;urm:`hRYt+ ՏOdO=>y|y:,?z?-/(ƗㄲzBƟo8׿&2(G];ib[Sˢb΁a؉CL7jxݠΖsU3%qzԆNM"_M Nב>H_f4FiQJR!Pw<[顄FB)Q8? Ŧ 6b?yK`zqo> L7/Wtqܮ3T$B(ӫ" T}Bk+.^\8߅Qv;UߟX}Cl@_)(smCykͧ]~(-b<V|WbzC'~>x:NyK>=#}zbޖ+&Xct¢b7hn.Ue IeOKƋ (>Y(X[g;@>uGkY.O)59 +vٽt*oLQΫ*Nsx]^x3ww߻R}A.eCw)ےNF6-ׯ+nJA|H^ 0Ҝ಍F~i]:~\Og@Oqo9D@P"QsA@D{C:4CZERo_-ם=fjGGף"H(`9kyz p;z8àt3ZJ {)ZŻJڳv!O0EI#]|=^mE%l~X؇_w7PZ@Ic"AByk22^r޸(Kw$j'('$C3Xmt@J裤21yA&W=(M{mPɛReẎkL_}'~"(@' X0:.V{/M^enz}EaʽXaWXD?@ meJ,Eð\w/n.ֿPE9 @sU|Q;liFF@YߵIw冾gYwmmH~]PHnwf1`13yB5q|Zs[lˊlr4б#~UbXCZ_sP1I z C˅9c{X+b"j0KpD fuܗr)0qw|sn2uAwW_>*~ܗx/dB(e1< aQ$Y~ҴNťhG \q Zg`wet'\.Z]*MD ^A#Mv^Άsto~%>)#Bk& Dž:W#x^2{ ;Nrh]%phR#п 4+:٤ŻQ/z5Xf JYZ\Jjdjކbodf4v~m^wD[:'ua8-mPhzF kmMn #CA59k࡬Zn^1MY?9 UkuTD.;!r0΂Ժխ62ϕV,K1e 2sZgf0tG)0Z&h1mfn}%oPی(4d}Gs9{(]Hn> |#$(PjQM }yRzأ 8\å/O\vOt5`Kp}W6S|(Jɾ/CZjV]RkWӬ>B+ּbU~CynLfgi4lR|ɳpoxxo6ZaRQZz?$^Һ[*_꜊P0FBrer5vYqTJޘ 3N5p=f. t]tL'K@ňN1Sq}[j^=WyB}Ye͝.NӼg֬=0Or읹7t;YYlHR[/=7WZq~ c.FjƩX+w<y=er^9]>'[F6q#-7< ):X\BsjT.i1sCA2Խڒ@pVñ 3U{,a31BB9=k(MT i7R0vH-A(3 J8D|CEQIE1BeiM8Vhhw/̅e}iq^==WK|Go\Z )W*ao??YN_Q.=^'H, C?Q3?B+~z5ۃ2TɡbJCK5s7_\}`<9#(6FpSZNV%UpmJ J35"K״6)xWxGB)K ѕՕmH`5d ޭzH+z!uPZ񛌐΄d=Jӗ!j?QgV+vi t}!SW|* z吖~0L;{=&Gzuu\P7I`o}M&*&>J(ۨV UߗP~ꇒo?Dގ[5Rݕvir-G߆8ҨJdos}-@tr=yQ>5FJw9phy_h% |>'ƞ@+j q;2c[Μ}Vvu`YOV z, 2^VLt|}֚GlΚ3~+h66Hktq &Qj%t1fOfӰV^gK٪Ѷ=L#TZoV4( 7yi2,˙@q?5TUN-<]+XB73Dzu.oػ[. K7ߥ+$.2Fj76'n]iP#:]F=UxvS{ڭ{YvCB~p$SVi8 ۭ+ jD;h"6`ڭEJև"!SP)S0T=;u~T[ K!(S lJIR!:g9&(8' iV96_Fz"C1%8)LD4wk2˪ #:Wv]w W&]lY f̙.ǂ423ebCv$0)d-Gbd$rR+s9T+b$ e1*06VТrrpۋߞ]^,Q.9'2m"Dk%l<_U@ u֔zޚGbyFCn}v #.M2 {9f2F7xA0XF5QYЎ#!F_팒e`"-faN Au~0Bf +IM,6F V ;S-#uzwt>?\Ȃ lPgq 7z<:P')!L"+Q1|ǐ6LD*ԋDgDf9qq rw$*#Η/(kx B<2CČý*D&<=֍¾03ʨe&glP#(E#j)bd%z*sPyQTJh&h`U3J BL# s`H k+Q!Hn%=Tpt3K{`2aу~aX>ّR-i@k(>aV#фӂWtHcG7aՓ\^uf]B30cc :;]F*jM\)j-Ih%F5fʃ"1U4(e1BHb3$Ӂю*0UA6.~>M A  3}. ^ @$Ilpq'|"xEh`PUPbZde4GgN{9 {kRH^ D'`}0Jά&C"{e ݉rJvԏRhљRIϜnOgBh9{愘^w'Q\9LJ EAz:С'@+çg ,lBg2ʨuw;\oR!3ESU3'DTtP"d( %Figd*eӖSu9[GcB]~~76F.yS`'ZxtU3P<m=o GvowtƫFGwUc;%FCW19a^Ix\)_K6J ]'-*tѢxA}4=K0G/o)Z4*?Mǔ3G D3ѿG^r 7D:ѶZWe<$(Pmy`s&tLfe[mZű\- >+S!iXlz9Ƭ$Q|LlhI6ԉq %ΎjkBWє+ | \/ e6$Fix 6Xx+k ,$#ȸi_.NQ%Ҽ?xO/a.]]ponjp.|Z R!wFjӏeX6jnP0_`r^-^wZ ޞ_,F#֌}@ʝx0},|\@YWr徹7L&Izw"kJ*ܐR5r 5JXs{%} B$%. 9'!iQ8ܖ铰udWr!Xs[ |#2ߓ+Z|E[ |k"+^H^]o 53$&liv54FOMۢ3[p0ѽiJQ =:\9(D&oL^I@e[;ыLwmev5s~Gy.pŬόw#[fu;] ^\sCtZRdvTn-uߔE9*>t:N6'=iDHۓfNO37_4pKΥ quArW&z e_o"{{ۓ'z~.AYKr4YoO,վTncۻo8(jn%D:`ƕSaӞ/j Z/hKf34LZcYa[/ Y+lAT! qRЬl(3/RT:)H\ =>47f>o)URQ%|"~$QiDUjI"כsy5_a,TY!PKΆعlk!mׯb/9E)Mxy`x6_VFd(W#CP#U`Z/X !Lr-~)* ڛ NNʀrZDd A1:ƶ8+a53. 'WEKa20แ,c"hs<T6ĭAk;{~I͝&Q"J"A(sXI( B'9d !tn%m&s6*֫ra XE_<y&s+"'&V~ٚփIe_KN ЭXk{&Q ZA%zFb{O;@ո]A 1HyEEbs4/ ށ5,T|c'@>\=#L$Gܷ{ ber/ R oh D($=8Ƭ΋c0maW8y3&/yI"O '! U.ϐr$<{&Jgpkbdk{9;s7Bx)ounjaIǰ:,Dz&D֌l[_VO <;'@O 1G)L Izi!{ܤrmؾ7)h VФ),X@4MK_IACx<|M R--+/4FRLj'2OCp2Dc@=M %446r44D+oҐњic>. 
15025ms (14:34:16.537)
Jan 21 14:34:16 crc kubenswrapper[4902]: Trace[2058640533]: [15.025832893s] [15.025832893s] END
Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.537338 4902 trace.go:236] Trace[562478301]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 14:34:02.420) (total time: 14116ms):
Jan 21 14:34:16 crc kubenswrapper[4902]: Trace[562478301]: ---"Objects listed" error: 14116ms (14:34:16.537)
Jan 21 14:34:16 crc kubenswrapper[4902]: Trace[562478301]: [14.116333259s] [14.116333259s] END
Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.537390 4902 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.537343 4902 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.537437 4902 trace.go:236] Trace[1341060811]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 14:34:01.755) (total time: 14782ms):
Jan 21 14:34:16 crc kubenswrapper[4902]: Trace[1341060811]: ---"Objects listed" error: 14782ms (14:34:16.537)
Jan 21 14:34:16 crc kubenswrapper[4902]: Trace[1341060811]: [14.782383245s] [14.782383245s] END
Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.537451 4902 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.538625 4902 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.543847 4902 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.565374 4902 csr.go:261] certificate signing request csr-jfcs6 is approved, waiting to be issued
Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.572224 4902 csr.go:257] certificate signing request csr-jfcs6 is issued
Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.579691 4902 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49792->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.579759 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49792->192.168.126.11:17697: read: connection reset by peer"
Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.580122 4902 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.580181 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.624185 4902 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.172281 4902 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.172356 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.233962 4902 apiserver.go:52] "Watching apiserver"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.237589 4902 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.237973 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.238425 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.238505 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.238566 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.238616 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.238743 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.240120 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.240410 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.241940 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.242022 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.242937 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.243036 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.243126 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.244585 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.244652 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.244781 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.245014 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.246309 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.246504 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.248143 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:40:57.724675158 +0000 UTC
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.297182 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.299169 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.301086 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.316260 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"]
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.336210 4902 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.336391 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342127 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342177 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342200 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342244 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342268 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342288 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342307 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342327 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342349 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342366 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342389 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342409 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342430 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342451 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342452 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342508 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342530 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342548 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342566 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342636 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342639 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342657 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342681 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342681 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342721 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342792 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342809 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342836 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342856 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342883 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342901 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342924 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342948 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342974 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342995 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343017 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343063 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343030 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343088 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343117 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343139 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343125 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343164 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343192 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343211 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343213 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343263 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343280 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343294 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343300 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343343 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343340 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343396 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343442 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343459 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343464 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343477 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343488 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343514 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343536 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343561 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343583 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343624 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343628 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343648 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343668 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343683 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343688 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343726 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343759 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343775 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343781 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343801 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343854 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343908 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343929 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343968 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343988 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344010 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344030 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344068 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344215 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344244 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344264 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344290 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344397 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344445 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344484 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344509 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344536 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344558 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344610 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344633 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344656 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344714 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344751 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344770 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344849 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344871 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344891 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344913 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344933 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344954 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344973 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345010 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345160 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345199 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345236 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345259 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345312 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345332 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345367 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345390 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345412 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345430 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345477 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345501 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345540 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345584 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345623 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345646 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345673 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345715 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345738 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345759 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345799 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume
started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345823 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345848 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345870 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345892 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345914 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345935 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345957 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345979 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346018 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346061 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346085 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346109 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346130 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346150 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346172 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346194 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346215 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346235 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346256 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346280 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346304 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346330 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346373 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346399 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346423 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346445 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346465 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346491 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346515 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 14:34:17 crc 
kubenswrapper[4902]: I0121 14:34:17.346538 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346562 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346585 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346605 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346627 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346652 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346675 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346698 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346720 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346744 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 14:34:17 crc 
kubenswrapper[4902]: I0121 14:34:17.346763 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346793 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346813 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346835 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346859 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346882 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346903 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346925 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346949 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346973 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346995 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347019 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347294 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347327 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347349 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347377 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347402 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347448 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347471 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347493 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347514 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347535 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347555 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347576 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347600 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347622 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347646 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347670 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347694 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347716 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347744 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347766 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347790 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347813 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347837 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347859 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347880 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347903 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347928 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347954 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347978 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348003 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348025 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348064 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348090 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348143 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348174 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348203 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348255 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348280 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348306 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348340 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348369 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348394 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348422 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348449 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348471 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348494 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348574 4902 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348591 4902 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348605 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348619 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348631 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348643 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348655 4902 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348670 4902 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348684 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348697 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348708 4902 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348720 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348733 4902 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348744 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348775 4902 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349697 4902 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343867 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344198 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344321 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344472 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344518 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344622 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344628 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344683 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344869 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345016 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345193 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345234 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345468 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345515 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345787 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345920 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346092 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346239 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346285 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346470 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346513 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346805 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347345 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347622 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347717 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347824 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347980 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348263 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348321 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348466 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.352807 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.352832 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348598 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348616 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348673 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348818 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348837 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348843 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348993 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349014 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349201 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349241 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349346 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349380 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349540 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349751 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349769 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.353155 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.349846 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:34:17.849828783 +0000 UTC m=+19.926661812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.353404 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.353575 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.353623 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.353667 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348516 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349975 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350097 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350118 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350249 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350284 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350404 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350422 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350531 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350662 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350742 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350814 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350864 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351003 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351178 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351287 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351346 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351363 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351537 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351618 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351703 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351911 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351953 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.352314 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.354093 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.354160 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.354260 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349934 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.354704 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.354401 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.354906 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.354988 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.355879 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.356340 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.356355 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.356518 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.356547 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.357188 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.357388 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.357738 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.357891 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.358002 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.358324 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.358340 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.358731 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.358804 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.358878 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.359091 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.359167 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.359189 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.359193 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.359473 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.359761 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.359785 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.359902 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.360099 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.361384 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.361618 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.361731 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.361755 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.362037 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.362276 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.362344 4902 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.362415 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:17.862392368 +0000 UTC m=+19.939225397 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.362359 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.362519 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.362777 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.362828 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.362890 4902 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.362920 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:17.862913403 +0000 UTC m=+19.939746432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.363068 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.363546 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.364122 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.364211 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.364492 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.365005 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.365145 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.365595 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.365686 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.365745 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.365956 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.366175 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.366251 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.366642 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.366689 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.367015 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.367056 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.367333 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.367482 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.367652 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.367912 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.367930 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.368141 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.368351 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.368650 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.368906 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.369906 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.370336 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.370449 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.370568 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.370650 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.370797 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.371747 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.371943 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.374268 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.374742 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.374787 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.374780 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.374805 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.374814 4902 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.374819 4902 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 
14:34:17.374900 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:17.874879112 +0000 UTC m=+19.951712361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.374924 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:17.874915233 +0000 UTC m=+19.951748492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.374974 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a0
2829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.377033 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
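
Note: The projected.go:288/194 errors and the nestedpendingoperations.go:348 entries above belong together: the kube-api-access-* projected volumes for the two openshift-network-diagnostics pods cannot be built because the kube-root-ca.crt and openshift-service-ca.crt configmaps are not yet registered in the kubelet's object cache (the m=+19.95s monotonic offset shows the kubelet restarted about twenty seconds earlier), so each MountVolume.SetUp is requeued, here with durationBeforeRetry 500ms. A hedged sketch of that retry shape; the doubling policy and cap are assumptions, not kubelet constants:

    // Sketch only: retry delay of the shape the nestedpendingoperations
    // lines report (500ms initial backoff, growing per failure, capped).
    package main

    import (
            "fmt"
            "time"
    )

    func durationBeforeRetry(failures int) time.Duration {
            d := 500 * time.Millisecond
            for i := 0; i < failures; i++ {
                    d *= 2
                    if d >= 2*time.Minute { // illustrative cap
                            return 2 * time.Minute
                    }
            }
            return d
    }

    func main() {
            for n := 0; n <= 5; n++ {
                    fmt.Printf("failure %d -> retry in %v\n", n, durationBeforeRetry(n))
            }
    }
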
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.377845 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.379440 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.379628 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.379754 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.380302 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.380375 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.380823 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.382753 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.383208 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.383553 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.383596 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.384350 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.384540 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.385432 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.385610 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.385660 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.385924 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.386963 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.386999 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.386975 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.387096 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.387433 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.388959 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.390031 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.390619 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.391123 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.396584 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.396966 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.397249 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.397314 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.398773 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.408833 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.409868 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.410343 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.413934 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.415439 4902 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02" exitCode=255 Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.415648 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02"} Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.421935 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.422377 4902 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.429275 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.430082 4902 scope.go:117] "RemoveContainer" containerID="35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.430357 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.432732 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.437533 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449552 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449617 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449705 4902 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449716 4902 reconciler_common.go:293] "Volume 
detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449726 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449735 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449744 4902 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449753 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449762 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449770 4902 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449779 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449788 4902 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449796 4902 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449804 4902 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449813 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449821 4902 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449830 4902 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449840 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449849 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449857 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449866 4902 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449876 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449886 4902 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449896 4902 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449887 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449908 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449945 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449904 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450072 4902 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450087 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450104 4902 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450117 4902 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450133 4902 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450147 4902 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450159 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450172 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450181 4902 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450197 4902 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450211 4902 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450224 4902 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450240 4902 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450257 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450274 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450288 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450302 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450315 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450329 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450342 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450354 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450367 4902 
reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450379 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450393 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450406 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450419 4902 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450433 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450445 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450457 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450469 4902 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450482 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450495 4902 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450509 4902 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450523 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450536 4902 reconciler_common.go:293] "Volume 
detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450549 4902 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450562 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450575 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450587 4902 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450601 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450613 4902 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450625 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450639 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450651 4902 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450663 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450673 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450685 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450697 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450709 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450720 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450732 4902 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450741 4902 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450753 4902 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450764 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450773 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450781 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450789 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450798 4902 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450807 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450815 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450824 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450833 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450842 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450851 4902 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450865 4902 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450886 4902 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450895 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450904 4902 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450913 4902 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450922 4902 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450937 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450949 4902 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450966 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450984 4902 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451070 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451086 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451098 4902 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451112 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451127 4902 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451141 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451156 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451202 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451216 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451233 4902 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451250 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451263 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451276 4902 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451290 4902 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451303 4902 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451317 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451330 4902 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451342 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451355 4902 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451367 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451388 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451401 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451413 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451427 4902 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451438 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451450 4902 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451464 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451477 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451489 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451500 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451512 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451523 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451537 4902 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451550 4902 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451561 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451571 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451592 4902 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451604 4902 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451615 4902 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451626 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451638 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451649 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451660 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451672 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451683 4902 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451694 4902 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451704 4902 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451715 4902 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451729 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451741 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451753 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451764 4902 reconciler_common.go:293] "Volume detached for 
volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451778 4902 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451791 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451804 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451815 4902 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451829 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451840 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451851 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451863 4902 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451875 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451887 4902 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451898 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451910 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451922 4902 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451933 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451945 4902 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451958 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451970 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451981 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451992 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452011 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452024 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452035 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452090 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452104 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452116 4902 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452127 4902 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452139 4902 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452152 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452165 4902 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452177 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.466331 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.477707 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21
T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.490183 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.503783 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.515404 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.525285 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.533867 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.547155 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.557157 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.565311 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.574007 4902 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 14:29:16 +0000 UTC, rotation deadline is 2026-11-20 09:14:14.406970723 +0000 UTC
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.574082 4902 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7266h39m56.83289123s for next certificate rotation
Jan 21 14:34:17 crc kubenswrapper[4902]: W0121 14:34:17.575639 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-38b087dd777b1643258b98fad4c6bc090fe401764756a6046f1cca9609518a85 WatchSource:0}: Error finding container 38b087dd777b1643258b98fad4c6bc090fe401764756a6046f1cca9609518a85: Status 404 returned error can't find the container with id 38b087dd777b1643258b98fad4c6bc090fe401764756a6046f1cca9609518a85
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.576596 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 21 14:34:17 crc kubenswrapper[4902]: W0121 14:34:17.597977 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-3da1374cc3504eef7f81b6552fe8e38497ff87dd64c88558be24c085e208c0ad WatchSource:0}: Error finding container 3da1374cc3504eef7f81b6552fe8e38497ff87dd64c88558be24c085e208c0ad: Status 404 returned error can't find the container with id 3da1374cc3504eef7f81b6552fe8e38497ff87dd64c88558be24c085e208c0ad
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.859954 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.860155 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:34:18.860129073 +0000 UTC m=+20.936962102 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.960559 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.960603 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.960625 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.960645 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.960773 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.960790 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.960800 4902 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.960845 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:18.960831874 +0000 UTC m=+21.037664903 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.960885 4902 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.960906 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:18.960900616 +0000 UTC m=+21.037733645 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.960948 4902 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.960967 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:18.960961387 +0000 UTC m=+21.037794416 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.961006 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.961014 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.961021 4902 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.961058 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:18.961033269 +0000 UTC m=+21.037866298 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.145976 4902 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146270 4902 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146334 4902 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146336 4902 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received
Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146366 4902 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received
Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146287 4902 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146286 4902 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received
Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.146413 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Post \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases?timeout=10s\": read tcp 38.129.56.21:49398->38.129.56.21:6443: use of closed network connection" interval="6.4s"
Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146411 4902 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received
Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146461 4902 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146483 4902 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received
Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.146442 4902 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.129.56.21:49398->38.129.56.21:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cc59ea371daa3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 14:33:58.787414691 +0000 UTC m=+0.864247720,LastTimestamp:2026-01-21 14:33:58.787414691 +0000 UTC m=+0.864247720,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.179228 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-62549"]
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.179643 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-62549"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.181526 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.181767 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.182709 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.193594 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.210489 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.230663 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.241907 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.249150 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:34:58.734244822 +0000 UTC
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.250910 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.262890 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5f7f4ebe-2b62-4cab-934b-f038b6a05d07-hosts-file\") pod \"node-resolver-62549\" (UID: \"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\") " pod="openshift-dns/node-resolver-62549"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.262941 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjnps\" (UniqueName: \"kubernetes.io/projected/5f7f4ebe-2b62-4cab-934b-f038b6a05d07-kube-api-access-qjnps\") pod \"node-resolver-62549\" (UID: \"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\") " pod="openshift-dns/node-resolver-62549"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.272127 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.284309 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.298842 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.299629 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.300267 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.300855 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.301411 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.301859 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.302422 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.302920 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.303560 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.304064 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.304565 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.305286 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.305775 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.308561 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.308585 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.309066 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.309889 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.310626 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.310975 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.311917 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.312494 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.312908 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.313829 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.314323 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.315334 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.315718 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.316688 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.317330 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.318192 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.318726 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.319263 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.320035 4902 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.320148 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.321158 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.321682 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.322653 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.323026 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.324518 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.325501 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.325967 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.327066 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.327937 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.328925 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.329499 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.330440 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.331059 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.331876 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.332393 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.333249 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.333588 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.333935 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.334748 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.335343 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.336183 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.336677 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.337216 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.338016 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.344703 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.363459 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjnps\" (UniqueName: \"kubernetes.io/projected/5f7f4ebe-2b62-4cab-934b-f038b6a05d07-kube-api-access-qjnps\") pod \"node-resolver-62549\" (UID: \"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\") " pod="openshift-dns/node-resolver-62549" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.363516 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5f7f4ebe-2b62-4cab-934b-f038b6a05d07-hosts-file\") pod \"node-resolver-62549\" (UID: \"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\") " pod="openshift-dns/node-resolver-62549" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.363578 4902 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.363657 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5f7f4ebe-2b62-4cab-934b-f038b6a05d07-hosts-file\") pod \"node-resolver-62549\" (UID: \"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\") " pod="openshift-dns/node-resolver-62549" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.380834 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.398319 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjnps\" (UniqueName: \"kubernetes.io/projected/5f7f4ebe-2b62-4cab-934b-f038b6a05d07-kube-api-access-qjnps\") pod \"node-resolver-62549\" (UID: \"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\") " pod="openshift-dns/node-resolver-62549" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.399734 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.419501 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.421469 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405"} Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.421752 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.423467 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3da1374cc3504eef7f81b6552fe8e38497ff87dd64c88558be24c085e208c0ad"} Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.425002 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c"} Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.425031 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d"} Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.425061 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5ea185bd2307fef20a711a18ba4db2a4ec1d2f999a29a181dd19b00ebc3b6ccc"} Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.426922 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1"} Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.426954 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"38b087dd777b1643258b98fad4c6bc090fe401764756a6046f1cca9609518a85"} Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.435719 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.459061 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.477493 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21
T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.492963 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-62549" Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.506550 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f7f4ebe_2b62_4cab_934b_f038b6a05d07.slice/crio-7d3692c1b4b3e544aef2f67122dcee6a64d813c953fe9e6ee3bf1f3b6807919f WatchSource:0}: Error finding container 7d3692c1b4b3e544aef2f67122dcee6a64d813c953fe9e6ee3bf1f3b6807919f: Status 404 returned error can't find the container with id 7d3692c1b4b3e544aef2f67122dcee6a64d813c953fe9e6ee3bf1f3b6807919f Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.514062 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b
82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.554142 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.574638 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.597345 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.613384 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.630307 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.651867 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.670530 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.684221 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.701296 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.867017 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.867152 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:34:20.867133609 +0000 UTC m=+22.943966638 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.942138 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-h68nf"] Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.942695 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.943108 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8l7jc"] Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.943895 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-m2bnb"] Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.944234 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-mztd6"] Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.944355 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.944545 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.944556 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.944949 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.945015 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.944742 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.944781 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.944830 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.952896 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.953089 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.953940 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954493 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954676 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954691 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954727 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954692 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954821 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954841 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954896 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954920 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954936 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.955064 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.967720 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-proxy-tls\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") 
" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.967788 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-cni-bin\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.967818 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-slash\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.967841 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-ovn-kubernetes\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968340 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968392 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968419 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-cnibin\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968470 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-system-cni-dir\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968523 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-netns\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.968541 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.968563 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.968578 4902 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.968630 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:20.968611911 +0000 UTC m=+23.045445140 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968547 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-node-log\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968683 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-script-lib\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968716 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968735 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-kubelet\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968751 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-system-cni-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968766 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-k8s-cni-cncf-io\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968782 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-multus-certs\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.968798 4902 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968801 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-env-overrides\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968835 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-os-release\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.968843 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:20.968832137 +0000 UTC m=+23.045665166 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968858 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cni-binary-copy\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968881 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rr89\" (UniqueName: \"kubernetes.io/projected/7dbee8a9-6952-46b5-a958-ff8f1847fabd-kube-api-access-9rr89\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968904 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968935 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968961 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcxq8\" (UniqueName: \"kubernetes.io/projected/0ec3a89a-830c-4274-8c1e-bd3c98120708-kube-api-access-fcxq8\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968988 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969019 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969062 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dtf6\" (UniqueName: 
\"kubernetes.io/projected/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-kube-api-access-4dtf6\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969082 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-bin\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969103 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-cni-multus\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969123 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-systemd-units\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.969131 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969142 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-etc-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969161 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-ovn\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.969147 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.969185 4902 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969187 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-rootfs\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:18 crc kubenswrapper[4902]: 
I0121 14:34:18.969219 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-os-release\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.969238 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:20.969226738 +0000 UTC m=+23.046059767 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969256 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-cni-binary-copy\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969278 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-systemd\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969299 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovn-node-metrics-cert\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969322 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-netns\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969372 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-etc-kubernetes\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969387 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-netd\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 
14:34:18.969403 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-cni-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969440 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-daemon-config\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969458 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h7w8\" (UniqueName: \"kubernetes.io/projected/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-kube-api-access-8h7w8\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969473 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-kubelet\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969512 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cnibin\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969575 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-hostroot\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969617 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-conf-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969634 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-log-socket\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969653 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-config\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 
14:34:18.969690 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969734 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-mcd-auth-proxy-config\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.969777 4902 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.969820 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:20.969806944 +0000 UTC m=+23.046639973 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969774 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-socket-dir-parent\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969847 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-var-lib-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.971427 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"i
mageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.995355 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.011882 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.031140 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.052919 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.059788 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071256 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-cnibin\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071306 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-system-cni-dir\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071333 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-netns\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071369 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-node-log\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071392 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-script-lib\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071411 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-system-cni-dir\") pod \"multus-additional-cni-plugins-h68nf\" (UID: 
\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071461 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-node-log\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071425 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-kubelet\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071496 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-netns\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071473 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-kubelet\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071422 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-cnibin\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071527 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-system-cni-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071558 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-k8s-cni-cncf-io\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071579 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-multus-certs\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071602 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-env-overrides\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071623 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-os-release\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071632 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-k8s-cni-cncf-io\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071645 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cni-binary-copy\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071667 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rr89\" (UniqueName: \"kubernetes.io/projected/7dbee8a9-6952-46b5-a958-ff8f1847fabd-kube-api-access-9rr89\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071680 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-system-cni-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071692 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071714 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071739 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcxq8\" (UniqueName: \"kubernetes.io/projected/0ec3a89a-830c-4274-8c1e-bd3c98120708-kube-api-access-fcxq8\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071761 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 
14:34:19.071847 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dtf6\" (UniqueName: \"kubernetes.io/projected/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-kube-api-access-4dtf6\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071873 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-bin\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071901 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-multus-certs\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071902 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-cni-multus\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071928 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-cni-multus\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071949 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-systemd-units\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071973 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-etc-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071997 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-ovn\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072022 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-rootfs\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072060 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-os-release\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072081 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-cni-binary-copy\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072099 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-systemd\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072120 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovn-node-metrics-cert\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072144 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-netns\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072167 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-etc-kubernetes\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072191 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-netd\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072217 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-cni-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072238 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-daemon-config\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072259 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h7w8\" (UniqueName: \"kubernetes.io/projected/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-kube-api-access-8h7w8\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " 
pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072282 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-kubelet\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072302 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cnibin\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072319 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-script-lib\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072383 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-hostroot\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072325 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-hostroot\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072420 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-systemd-units\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072423 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-conf-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072447 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-conf-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072456 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-log-socket\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072480 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-config\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072484 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-env-overrides\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072521 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-mcd-auth-proxy-config\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072542 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-socket-dir-parent\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072562 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-var-lib-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072587 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-proxy-tls\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072608 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-cni-bin\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072632 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-slash\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072657 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072668 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cni-binary-copy\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072684 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-ovn-kubernetes\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072714 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-bin\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071873 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-os-release\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072755 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-ovn-kubernetes\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072757 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072787 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072797 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-etc-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072837 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-kubelet\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072868 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cnibin\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072901 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-log-socket\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072926 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-ovn\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072963 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-rootfs\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073024 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-os-release\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073071 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073124 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-slash\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073158 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-cni-bin\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073203 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-socket-dir-parent\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073532 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-config\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073584 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-var-lib-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073620 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-etc-kubernetes\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073650 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-systemd\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073752 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-cni-binary-copy\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073804 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-netd\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073855 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-mcd-auth-proxy-config\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073969 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-cni-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.074014 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-netns\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.074072 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.074375 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-daemon-config\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.079889 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.080830 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovn-node-metrics-cert\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.081452 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-proxy-tls\") pod \"machine-config-daemon-m2bnb\" 
(UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.101290 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.101705 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rr89\" (UniqueName: \"kubernetes.io/projected/7dbee8a9-6952-46b5-a958-ff8f1847fabd-kube-api-access-9rr89\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.117475 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.118676 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.146370 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.196555 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.206283 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dtf6\" (UniqueName: \"kubernetes.io/projected/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-kube-api-access-4dtf6\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.206333 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcxq8\" (UniqueName: \"kubernetes.io/projected/0ec3a89a-830c-4274-8c1e-bd3c98120708-kube-api-access-fcxq8\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.220062 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.244901 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.249331 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 03:20:42.093382458 +0000 UTC Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.256454 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.259070 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.271391 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.275782 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\"
,\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\
\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.278861 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.287981 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.294840 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.294865 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.294855 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:19 crc kubenswrapper[4902]: E0121 14:34:19.294945 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:19 crc kubenswrapper[4902]: E0121 14:34:19.295034 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:19 crc kubenswrapper[4902]: E0121 14:34:19.295130 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.299235 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.307276 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8h7w8\" (UniqueName: \"kubernetes.io/projected/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-kube-api-access-8h7w8\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.320230 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.339102 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.365072 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.380294 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.394078 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.408636 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.422783 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.429616 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"5ce6899ab2b12b8f4895228356fb88bbef937550a4743b5874ab9aba66a78a98"} Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.434319 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerStarted","Data":"604704edeaee03e5b7b43758ad962447ff48da321e957b529c8db32a87c93efe"} Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.436573 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.436718 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"5db5d6c210cd289c7ac6d65a204f7254d1bffda346bedb3bbbbf5f06bf748884"} Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.438067 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-62549" event={"ID":"5f7f4ebe-2b62-4cab-934b-f038b6a05d07","Type":"ContainerStarted","Data":"dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9"} Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.438133 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-62549" event={"ID":"5f7f4ebe-2b62-4cab-934b-f038b6a05d07","Type":"ContainerStarted","Data":"7d3692c1b4b3e544aef2f67122dcee6a64d813c953fe9e6ee3bf1f3b6807919f"} Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.452424 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.466586 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.468701 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.482461 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.499707 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.516395 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.529155 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.543713 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.549036 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.550817 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.557609 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.563399 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.568174 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.570900 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.586009 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.585878 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.589590 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.599416 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.624938 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.638440 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.138887 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.152292 4902 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-etcd/etcd-crc"] Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.155742 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 
14:34:20.156790 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.170776 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.182106 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.198122 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.211526 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.225823 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.239530 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.250362 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 21:04:54.763893733 +0000 UTC Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.252904 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.272753 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.286489 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.299612 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.312441 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.324253 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.336030 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.348208 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.360120 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.375019 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.388088 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.400512 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.410845 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.426901 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.443459 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc"} Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.443514 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388"} Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.445518 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mztd6" event={"ID":"037b55cf-cb9e-41ce-8b1e-3898f490a4aa","Type":"ContainerStarted","Data":"801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e"} Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.445581 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mztd6" event={"ID":"037b55cf-cb9e-41ce-8b1e-3898f490a4aa","Type":"ContainerStarted","Data":"2640a78ce524443cef8004d901f431b31719521bf07a79a70416e95f2c4391f7"} Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.448247 4902 generic.go:334] "Generic (PLEG): container finished" podID="7dbee8a9-6952-46b5-a958-ff8f1847fabd" containerID="ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597" exitCode=0 Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.448346 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerDied","Data":"ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597"} Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.450826 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9" exitCode=0 Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.450936 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.452980 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d"} Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.469770 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5
646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.507584 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.551519 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.596825 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.630324 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.676707 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.704268 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.750675 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\"
:\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.784685 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.824364 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.863099 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.891532 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.891675 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:34:24.891649991 +0000 UTC m=+26.968483020 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.906022 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc 
kubenswrapper[4902]: I0121 14:34:20.945749 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.984711 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.993124 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:20 crc 
kubenswrapper[4902]: I0121 14:34:20.993167 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.993189 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.993208 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993282 4902 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993323 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993368 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993366 4902 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993332 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:24.993319009 +0000 UTC m=+27.070152038 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993442 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:24.993425042 +0000 UTC m=+27.070258071 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993384 4902 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993457 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993490 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993502 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:24.993483053 +0000 UTC m=+27.070316082 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993504 4902 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993577 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:24.993557345 +0000 UTC m=+27.070390374 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.025898 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.063397 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.066719 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.068512 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.068548 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.068562 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.068634 4902 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.129397 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.136153 4902 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.136413 4902 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.137550 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.137576 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.137585 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.137599 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.137608 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.159371 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.162345 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.162378 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.162390 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.162409 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.162420 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.176907 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.180432 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.180469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.180483 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.180502 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.180516 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.192471 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.195917 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.195948 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.195957 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.195972 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.195981 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.199625 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.209433 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.216253 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.216293 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.216303 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.216318 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.216330 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.227990 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.228130 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.228299 4902 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.230240 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.230262 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.230271 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.230284 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.230292 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.250960 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 15:03:22.52646036 +0000 UTC Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.265945 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:
8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.294301 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.294301 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.294407 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.294551 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.294598 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.294656 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.332709 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.332745 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.332757 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.332773 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.332786 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.434649 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.434693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.434705 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.434722 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.434732 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.460753 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.460834 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.460865 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.460893 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.460914 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.460950 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.463602 4902 generic.go:334] "Generic (PLEG): container finished" podID="7dbee8a9-6952-46b5-a958-ff8f1847fabd" containerID="0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30" exitCode=0 Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.463788 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerDied","Data":"0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.480336 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.495582 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.508453 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.520639 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.532961 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.538169 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.538218 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.538230 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.538249 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.538262 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.544809 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.559772 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lg6wz"] Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.560232 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.560604 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.575428 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.595242 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.598526 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f01bb5a-c917-4341-a173-725a85c1f0d2-host\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.598581 4902 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv4jk\" (UniqueName: \"kubernetes.io/projected/6f01bb5a-c917-4341-a173-725a85c1f0d2-kube-api-access-gv4jk\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.598622 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6f01bb5a-c917-4341-a173-725a85c1f0d2-serviceca\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.616228 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.637054 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.640547 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.640578 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.640588 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.640602 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.640612 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.670640 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.699377 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f01bb5a-c917-4341-a173-725a85c1f0d2-host\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.699434 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv4jk\" (UniqueName: \"kubernetes.io/projected/6f01bb5a-c917-4341-a173-725a85c1f0d2-kube-api-access-gv4jk\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.699465 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6f01bb5a-c917-4341-a173-725a85c1f0d2-serviceca\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.699624 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f01bb5a-c917-4341-a173-725a85c1f0d2-host\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.700550 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6f01bb5a-c917-4341-a173-725a85c1f0d2-serviceca\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 
14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.707355 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.734976 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv4jk\" (UniqueName: \"kubernetes.io/projected/6f01bb5a-c917-4341-a173-725a85c1f0d2-kube-api-access-gv4jk\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.743139 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.743164 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.743172 4902 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.743185 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.743195 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.769552 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\
\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.808676 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.845995 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.846110 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.846139 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.846172 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.846195 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.853230 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.873237 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: W0121 14:34:21.888866 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f01bb5a_c917_4341_a173_725a85c1f0d2.slice/crio-d05e37e7081cdd970393e37320a02beac918a818bf16915ff71656467471a497 WatchSource:0}: Error finding container d05e37e7081cdd970393e37320a02beac918a818bf16915ff71656467471a497: Status 404 returned error can't find the container with id d05e37e7081cdd970393e37320a02beac918a818bf16915ff71656467471a497 Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.900178 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z 
is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.925424 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.947983 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.948021 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.948037 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.948091 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.948105 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.966730 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.006327 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.045129 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.050203 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.050237 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.050246 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.050259 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.050268 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.088252 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.134973 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z 
is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.153498 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.153558 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.153581 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.153614 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.153635 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.181885 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.204015 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.245969 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.251828 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:39:32.744108303 +0000 UTC Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.255930 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.255976 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.255989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.256012 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc 
kubenswrapper[4902]: I0121 14:34:22.256024 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.286676 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":
\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.325546 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.359196 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.359264 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.359281 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.359301 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.359314 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.367653 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.404419 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.446001 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.461366 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.461407 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.461420 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.461437 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.461449 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.468578 4902 generic.go:334] "Generic (PLEG): container finished" podID="7dbee8a9-6952-46b5-a958-ff8f1847fabd" containerID="02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475" exitCode=0 Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.468652 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerDied","Data":"02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.470274 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lg6wz" event={"ID":"6f01bb5a-c917-4341-a173-725a85c1f0d2","Type":"ContainerStarted","Data":"5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.470356 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lg6wz" event={"ID":"6f01bb5a-c917-4341-a173-725a85c1f0d2","Type":"ContainerStarted","Data":"d05e37e7081cdd970393e37320a02beac918a818bf16915ff71656467471a497"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.482648 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.527899 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.563309 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.563349 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.563361 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.563375 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.563384 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.574162 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\
\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.607267 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.645635 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.665935 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.665965 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.665975 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.665989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.665999 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.684346 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.724390 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.768531 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.768566 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.768577 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.768593 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.768605 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.768834 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a97
0d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.804264 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.842747 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.871291 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.871314 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.871322 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.871334 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.871343 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.887079 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.925996 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.972405 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.974315 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.974346 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.974356 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.974372 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.974384 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.008683 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.043793 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.077519 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.077563 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.077577 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.077618 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.077662 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.083381 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.127590 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.180644 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.180692 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.180709 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.180730 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.180745 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.252697 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 20:23:49.653877099 +0000 UTC
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.283627 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.283678 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.283689 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.283709 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.283720 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.293847 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.293870 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.293854 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:34:23 crc kubenswrapper[4902]: E0121 14:34:23.293971 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:34:23 crc kubenswrapper[4902]: E0121 14:34:23.294023 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:34:23 crc kubenswrapper[4902]: E0121 14:34:23.294107 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.386203 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.386241 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.386251 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.386267 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.386278 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.486389 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"}
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.489860 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.489935 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.489958 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.489988 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.490012 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.490862 4902 generic.go:334] "Generic (PLEG): container finished" podID="7dbee8a9-6952-46b5-a958-ff8f1847fabd" containerID="bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f" exitCode=0
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.490915 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerDied","Data":"bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f"}
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.509002 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.524647 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.543096 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.558400 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.574950 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.590200 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.592436 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.592470 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.592480 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.592499 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.592510 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.604845 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.622060 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.637419 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.650056 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.673033 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.698610 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.698660 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.698675 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.698695 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.698708 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.723219 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.740825 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.759098 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.773570 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.801623 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.801660 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.801675 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.801690 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.801700 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.904696 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.904730 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.904739 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.904755 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.904766 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.007510 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.007543 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.007554 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.007568 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.007579 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.109906 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.109979 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.110003 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.110033 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.110101 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.213036 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.213097 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.213108 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.213123 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.213135 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.253015 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 20:41:48.95167179 +0000 UTC Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.315645 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.315702 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.315722 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.315741 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.315755 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.418250 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.418312 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.418329 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.418353 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.418369 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.502832 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerStarted","Data":"26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.520854 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.520908 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.520925 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.520953 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.520973 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.524387 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/k
ubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.541961 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.559403 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.573876 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.589983 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.605646 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.624156 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.624199 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.624209 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.624223 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.624233 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.630392 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.646464 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.663520 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.680012 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.695134 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.719597 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z 
is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.726564 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.726613 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.726625 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.726642 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.726654 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.744197 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.764790 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.784506 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.829890 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.829934 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.829944 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.829960 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.829971 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.929597 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:24 crc kubenswrapper[4902]: E0121 14:34:24.929867 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:34:32.92983689 +0000 UTC m=+35.006669919 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.932613 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.932654 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.932667 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.932682 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.932693 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.031475 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.031613 4902 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.031749 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:33.031734846 +0000 UTC m=+35.108567865 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.031940 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.031979 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.031999 4902 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.032077 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.032080 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:33.032034345 +0000 UTC m=+35.108867414 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.032114 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.032134 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.032212 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.032228 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.032237 4902 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.032264 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:33.032255451 +0000 UTC m=+35.109088470 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.032301 4902 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.032326 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:33.032319053 +0000 UTC m=+35.109152082 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.036293 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.036340 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.036362 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.036391 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.036413 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.139517 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.139578 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.139596 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.139613 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.139623 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.242750 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.242811 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.242831 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.242859 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.242878 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.253540 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 18:23:00.154338694 +0000 UTC Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.294166 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.294207 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.294301 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.294408 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.294550 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.294722 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.344815 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.344853 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.344866 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.344884 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.344899 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.448237 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.448312 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.448337 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.448373 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.448396 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.512308 4902 generic.go:334] "Generic (PLEG): container finished" podID="7dbee8a9-6952-46b5-a958-ff8f1847fabd" containerID="26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870" exitCode=0 Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.512379 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerDied","Data":"26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.534846 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.552210 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.552266 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.552274 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.552295 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.552305 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.556928 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.579735 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.597153 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.611538 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.628348 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.651290 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.655818 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.655873 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.655886 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.655904 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.655921 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.665774 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.680788 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.696126 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.712459 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.737401 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.753125 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.758679 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.758724 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.758736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.758757 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.758773 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.767150 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.780945 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.861861 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.861892 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.861901 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.861916 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.861927 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.964878 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.965175 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.965186 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.965200 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.965211 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.067768 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.067829 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.067845 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.067867 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.067886 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.170678 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.170725 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.170738 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.170754 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.170763 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.254127 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:34:06.114480974 +0000 UTC Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.273395 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.273446 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.273458 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.273480 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.273494 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.376497 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.376541 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.376551 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.376565 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.376575 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.479442 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.479487 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.479496 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.479511 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.479521 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.521372 4902 generic.go:334] "Generic (PLEG): container finished" podID="7dbee8a9-6952-46b5-a958-ff8f1847fabd" containerID="99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93" exitCode=0 Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.521489 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerDied","Data":"99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.528419 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.528855 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.528896 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.529019 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.557389 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.563113 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.574121 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.581636 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.582591 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.582665 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.582693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.582724 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.582746 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.597071 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.615004 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.629137 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.647566 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.661890 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.675447 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.685286 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.685471 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.685485 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.685507 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.685519 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.690962 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.706369 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.721907 4902 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.735916 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.749354 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.762309 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.774914 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.787961 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.788001 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.788011 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.788027 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.788054 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.793866 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.808451 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.822542 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.834578 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.846062 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.861590 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26f
bb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.878918 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.890860 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.890894 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.890905 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.890922 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.890933 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.891350 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.904063 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.917474 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.931476 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.950278 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.964275 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.977885 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.992825 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.993678 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.993745 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.993756 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.993775 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.993787 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.096696 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.096736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.096748 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.096765 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.096777 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.177332 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.193597 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.199075 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.199130 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.199143 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.199162 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.199178 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.209693 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.223718 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.239027 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.253287 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.254359 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 23:58:16.887998491 +0000 UTC
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.269282 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.287891 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.294884 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.294906 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:34:27 crc kubenswrapper[4902]: E0121 14:34:27.295076 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:34:27 crc kubenswrapper[4902]: E0121 14:34:27.295144 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.294911 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:34:27 crc kubenswrapper[4902]: E0121 14:34:27.295223 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.301798 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.301823 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.301834 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.301848 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.301858 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.311558 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.327178 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.346343 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.366844 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.381879 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.405705 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.405810 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.405838 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.405897 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.405938 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.406561 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.423704 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet
valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.436598 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.508683 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.508753 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.508764 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.508780 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.508793 4902 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.537457 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerStarted","Data":"5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.564856 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc
/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.587073 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.609009 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.612434 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.612475 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.612486 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.612503 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.612515 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.632299 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.647289 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.672534 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.689403 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.708370 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.715158 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.715436 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.715451 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.715469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.715481 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.730487 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.744797 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.766096 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b
3f6a4021c40ab28615358250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.784144 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.797791 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5b
c5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.808507 4902 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.817427 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.817463 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.817474 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.817491 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.817502 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.822104 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.920313 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.920356 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.920370 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.920385 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.920397 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.022973 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.023014 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.023025 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.023056 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.023066 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.126417 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.126476 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.126491 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.126509 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.126525 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.228920 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.228961 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.228972 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.228988 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.228998 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.255460 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:11:40.141525214 +0000 UTC Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.316254 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30
a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure
-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.331482 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.332330 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.332402 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.332418 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.332440 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.332455 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.346166 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.363422 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.380288 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.434093 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.434133 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.434144 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.434160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.434171 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.443360 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.455989 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.467281 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.479559 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.494492 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.507086 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.519480 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.533766 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.537116 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.537236 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.537305 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.537383 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.537442 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.541898 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/0.log" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.544770 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250" exitCode=1 Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.544917 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.545946 4902 scope.go:117] "RemoveContainer" containerID="14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.551606 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.564902 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.578875 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.592060 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.608551 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.623383 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.635659 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.639671 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.639809 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.639916 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.639999 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.640103 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.648933 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.696635 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.731428 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.744892 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.746923 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.746966 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.746979 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.746999 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.747012 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.760163 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.773757 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.793847 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b
3f6a4021c40ab28615358250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:28Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:34:28.257205 6159 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:34:28.257243 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 14:34:28.257251 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 14:34:28.257264 6159 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 14:34:28.257283 6159 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:34:28.257289 6159 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:34:28.257314 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:34:28.257426 6159 factory.go:656] Stopping watch factory\\\\nI0121 14:34:28.257445 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:28.257342 6159 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 14:34:28.257476 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:28.257483 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:34:28.257493 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:28.257569 6159 ovnkube.go:137] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: 
f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.816831 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.831318 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.844561 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.849653 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.849693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.849706 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.849724 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.849736 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.952626 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.952665 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.952674 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.952690 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.952700 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.055876 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.055922 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.055934 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.055950 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.055960 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.158125 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.158162 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.158171 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.158187 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.158198 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.256448 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:46:53.682575152 +0000 UTC Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.265797 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.266156 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.266176 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.266198 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.266214 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.294173 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.294219 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:29 crc kubenswrapper[4902]: E0121 14:34:29.294342 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.294374 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:29 crc kubenswrapper[4902]: E0121 14:34:29.294483 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:29 crc kubenswrapper[4902]: E0121 14:34:29.294562 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.369689 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.369722 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.369731 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.369746 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.369757 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.472537 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.472617 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.472635 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.472661 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.472682 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.551399 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/0.log" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.555647 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.556375 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.575692 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.575735 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.575744 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.575762 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.575774 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.576212 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.597368 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.621227 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.635570 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.650740 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.669404 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.678934 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.678993 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc 
kubenswrapper[4902]: I0121 14:34:29.679007 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.679060 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.679078 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.693904 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.712272 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.726396 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.742937 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.761349 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.781750 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.781825 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.781846 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.781878 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.781903 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.786758 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:28Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:34:28.257205 6159 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:34:28.257243 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 14:34:28.257251 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 14:34:28.257264 6159 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 14:34:28.257283 6159 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:34:28.257289 6159 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:34:28.257314 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:34:28.257426 6159 factory.go:656] Stopping watch factory\\\\nI0121 14:34:28.257445 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:28.257342 6159 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 14:34:28.257476 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:28.257483 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:34:28.257493 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:28.257569 6159 ovnkube.go:137] failed to run ovnkube: failed to start node network controller: failed to start default node network 
controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.803681 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.815761 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.829289 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.884886 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.884947 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.884968 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.884997 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.885021 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.987776 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.987851 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.987879 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.987912 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.987938 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.091462 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.091542 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.091567 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.091603 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.091627 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.195382 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.195430 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.195440 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.195459 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.195473 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.257674 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:32:48.57644292 +0000 UTC Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.298451 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.298515 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.298526 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.298546 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.298556 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.401782 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.401851 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.401864 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.401890 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.401905 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.505107 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.505201 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.505227 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.505262 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.505285 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.609467 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.609542 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.609565 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.609600 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.609623 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.712960 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.713019 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.713032 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.713109 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.713124 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.815409 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.815478 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.815487 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.815506 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.815517 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.918398 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.918469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.918484 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.918509 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.918526 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.020970 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.021014 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.021026 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.021074 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.021087 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.125252 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.125337 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.125348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.125370 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.125382 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.228820 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.228875 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.228893 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.228920 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.228939 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.258561 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 11:18:24.823850548 +0000 UTC Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.265904 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw"] Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.266431 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.269570 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.272377 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.285464 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.294557 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.294723 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.294994 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.295184 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.295656 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.295729 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.302298 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.321734 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.331625 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.331811 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.331933 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.332104 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.332225 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.339494 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.358201 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.374636 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.392362 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.401854 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f00b2c1e-2662-466e-b936-05f43db67fec-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.401926 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f00b2c1e-2662-466e-b936-05f43db67fec-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.402067 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4sj6\" (UniqueName: \"kubernetes.io/projected/f00b2c1e-2662-466e-b936-05f43db67fec-kube-api-access-p4sj6\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.402150 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f00b2c1e-2662-466e-b936-05f43db67fec-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.403191 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.403233 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.403251 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.403277 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.403291 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.409349 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.419100 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.421252 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.423586 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.423659 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.423685 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.423717 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.423742 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.441174 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 
2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.445820 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.446211 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.446270 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.446285 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.446307 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.446328 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.466092 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 
2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.470904 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.470939 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.470949 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.470965 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.470975 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.478491 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877
441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.483057 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.487418 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.487467 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.487481 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.487500 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.487514 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.496078 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.500566 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.500853 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502534 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502628 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502676 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f00b2c1e-2662-466e-b936-05f43db67fec-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502737 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f00b2c1e-2662-466e-b936-05f43db67fec-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502760 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f00b2c1e-2662-466e-b936-05f43db67fec-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502780 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4sj6\" (UniqueName: \"kubernetes.io/projected/f00b2c1e-2662-466e-b936-05f43db67fec-kube-api-access-p4sj6\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502711 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502928 4902 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502990 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.503590 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f00b2c1e-2662-466e-b936-05f43db67fec-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.503632 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f00b2c1e-2662-466e-b936-05f43db67fec-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.509657 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.511454 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f00b2c1e-2662-466e-b936-05f43db67fec-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.520759 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p4sj6\" (UniqueName: \"kubernetes.io/projected/f00b2c1e-2662-466e-b936-05f43db67fec-kube-api-access-p4sj6\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.523489 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.537603 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.555232 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190
222d4f504853060fa40dc1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:28Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:34:28.257205 6159 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:34:28.257243 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 14:34:28.257251 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 14:34:28.257264 6159 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 14:34:28.257283 6159 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:34:28.257289 6159 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:34:28.257314 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:34:28.257426 6159 factory.go:656] Stopping watch factory\\\\nI0121 14:34:28.257445 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:28.257342 6159 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 14:34:28.257476 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:28.257483 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:34:28.257493 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:28.257569 6159 ovnkube.go:137] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: 
f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.563487 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/1.log" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.564450 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/0.log" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.567147 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae" exitCode=1 Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.567183 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.567234 4902 scope.go:117] "RemoveContainer" containerID="14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.569902 4902 scope.go:117] "RemoveContainer" containerID="c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae" Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.570223 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.584864 4902 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e
2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.588708 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.598102 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: W0121 14:34:31.606581 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf00b2c1e_2662_466e_b936_05f43db67fec.slice/crio-2980df735d450d2bb49d80b102f7391b6e3ce04ca275525a69e4d76b33131155 WatchSource:0}: Error finding container 2980df735d450d2bb49d80b102f7391b6e3ce04ca275525a69e4d76b33131155: Status 404 returned error can't find the container with id 2980df735d450d2bb49d80b102f7391b6e3ce04ca275525a69e4d76b33131155 Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.607727 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.607849 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.607949 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.608032 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.608140 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.613847 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.627701 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.647591 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190
222d4f504853060fa40dc1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:28Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:34:28.257205 6159 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:34:28.257243 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 14:34:28.257251 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 14:34:28.257264 6159 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 14:34:28.257283 6159 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:34:28.257289 6159 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:34:28.257314 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:34:28.257426 6159 factory.go:656] Stopping watch factory\\\\nI0121 14:34:28.257445 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:28.257342 6159 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 14:34:28.257476 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:28.257483 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:34:28.257493 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:28.257569 6159 ovnkube.go:137] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:30Z\\\",\\\"message\\\":\\\":443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 14:34:29.333853 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:29.333878 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:29.333891 6305 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:34:29.333901 6305 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:29.333966 6305 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: 
Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z]\\\\nI0121 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.670702 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.684400 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.695979 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.709655 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.710767 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.710809 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.710823 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.710842 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.710855 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.722473 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.736823 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.751371 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.765934 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.779358 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.791557 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.810505 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.813379 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.813416 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc 
kubenswrapper[4902]: I0121 14:34:31.813426 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.813442 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.813451 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.916111 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.916285 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.916333 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.916369 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.916395 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.019284 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.019327 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.019341 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.019380 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.019392 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.123697 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.123786 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.123812 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.123843 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.123866 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.227011 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.227071 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.227087 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.227134 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.227146 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.259145 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 16:49:19.693132728 +0000 UTC Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.330449 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.330504 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.330517 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.330536 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.330556 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.391626 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kq588"] Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.392092 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:32 crc kubenswrapper[4902]: E0121 14:34:32.392154 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.413246 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac695
8b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.428694 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.432636 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.432698 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.432715 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.432737 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.432752 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.445955 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.465151 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.479350 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.510722 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190
222d4f504853060fa40dc1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:28Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:34:28.257205 6159 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:34:28.257243 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 14:34:28.257251 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 14:34:28.257264 6159 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 14:34:28.257283 6159 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:34:28.257289 6159 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:34:28.257314 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:34:28.257426 6159 factory.go:656] Stopping watch factory\\\\nI0121 14:34:28.257445 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:28.257342 6159 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 14:34:28.257476 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:28.257483 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:34:28.257493 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:28.257569 6159 ovnkube.go:137] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:30Z\\\",\\\"message\\\":\\\":443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 14:34:29.333853 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:29.333878 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:29.333891 6305 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:34:29.333901 6305 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:29.333966 6305 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: 
Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z]\\\\nI0121 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.516465 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.516516 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh22z\" (UniqueName: \"kubernetes.io/projected/05d94e6a-249a-484c-8895-085e81f1dfaa-kube-api-access-wh22z\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.522826 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.533708 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.535186 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.535217 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.535227 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.535244 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.535255 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.547368 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.558332 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.569057 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.570597 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" event={"ID":"f00b2c1e-2662-466e-b936-05f43db67fec","Type":"ContainerStarted","Data":"2980df735d450d2bb49d80b102f7391b6e3ce04ca275525a69e4d76b33131155"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.583215 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.595073 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.609976 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.617499 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.617536 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh22z\" (UniqueName: \"kubernetes.io/projected/05d94e6a-249a-484c-8895-085e81f1dfaa-kube-api-access-wh22z\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:32 crc kubenswrapper[4902]: E0121 14:34:32.617731 4902 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:32 crc kubenswrapper[4902]: E0121 14:34:32.617876 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs podName:05d94e6a-249a-484c-8895-085e81f1dfaa nodeName:}" failed. No retries permitted until 2026-01-21 14:34:33.117843483 +0000 UTC m=+35.194676692 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs") pod "network-metrics-daemon-kq588" (UID: "05d94e6a-249a-484c-8895-085e81f1dfaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.626534 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.634980 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh22z\" (UniqueName: \"kubernetes.io/projected/05d94e6a-249a-484c-8895-085e81f1dfaa-kube-api-access-wh22z\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.640825 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.640885 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.640903 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: 
I0121 14:34:32.640928 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.640946 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.643910 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.666966 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.743881 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.743941 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc 
kubenswrapper[4902]: I0121 14:34:32.743951 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.743967 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.743979 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.846847 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.847231 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.847385 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.847491 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.847573 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.951021 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.951096 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.951108 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.951155 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.951170 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.022756 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.023078 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:34:49.023029369 +0000 UTC m=+51.099862418 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.054912 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.054962 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.054973 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.054988 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.054997 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.124199 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.124248 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.124276 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.124299 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.124319 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124382 4902 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124405 4902 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124411 4902 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124444 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:49.124428651 +0000 UTC m=+51.201261680 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124469 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124510 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124535 4902 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124487 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs podName:05d94e6a-249a-484c-8895-085e81f1dfaa nodeName:}" failed. No retries permitted until 2026-01-21 14:34:34.124473642 +0000 UTC m=+36.201306671 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs") pod "network-metrics-daemon-kq588" (UID: "05d94e6a-249a-484c-8895-085e81f1dfaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124627 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:49.124602166 +0000 UTC m=+51.201435415 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124626 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124686 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124656 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:49.124648247 +0000 UTC m=+51.201481506 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124704 4902 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124817 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:49.124785101 +0000 UTC m=+51.201618160 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.157797 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.157849 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.157859 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.157876 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.157888 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.259827 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:32:06.94026988 +0000 UTC Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.261308 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.261377 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.261394 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.261414 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.261428 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.294706 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.294757 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.294830 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.294851 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.294994 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.295199 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.364856 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.364898 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.364911 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.364927 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.364939 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.467326 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.467364 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.467373 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.467387 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.467397 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.570304 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.570355 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.570364 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.570381 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.570392 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.575141 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/1.log" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.580831 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" event={"ID":"f00b2c1e-2662-466e-b936-05f43db67fec","Type":"ContainerStarted","Data":"baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.580900 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" event={"ID":"f00b2c1e-2662-466e-b936-05f43db67fec","Type":"ContainerStarted","Data":"4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.596035 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.609409 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.621902 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.636495 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.654808 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.672960 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.673029 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.673086 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.673119 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.673142 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.676216 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.689719 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.713291 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.728142 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.758317 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190
222d4f504853060fa40dc1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:28Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:34:28.257205 6159 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:34:28.257243 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 14:34:28.257251 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 14:34:28.257264 6159 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 14:34:28.257283 6159 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:34:28.257289 6159 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:34:28.257314 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:34:28.257426 6159 factory.go:656] Stopping watch factory\\\\nI0121 14:34:28.257445 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:28.257342 6159 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 14:34:28.257476 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:28.257483 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:34:28.257493 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:28.257569 6159 ovnkube.go:137] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:30Z\\\",\\\"message\\\":\\\":443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 14:34:29.333853 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:29.333878 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:29.333891 6305 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:34:29.333901 6305 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:29.333966 6305 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: 
Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z]\\\\nI0121 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.776598 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.776663 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.776683 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.776710 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.776728 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.779882 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.797637 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.812339 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.823747 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815
d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.840913 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.861074 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"
name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.875876 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.879583 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.879621 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.879630 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.879662 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.879673 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.982029 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.982117 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.982134 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.982158 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.982178 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.084851 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.084905 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.084917 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.084935 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.084947 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.134519 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588"
Jan 21 14:34:34 crc kubenswrapper[4902]: E0121 14:34:34.134763 4902 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 14:34:34 crc kubenswrapper[4902]: E0121 14:34:34.134881 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs podName:05d94e6a-249a-484c-8895-085e81f1dfaa nodeName:}" failed. No retries permitted until 2026-01-21 14:34:36.134852799 +0000 UTC m=+38.211685868 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs") pod "network-metrics-daemon-kq588" (UID: "05d94e6a-249a-484c-8895-085e81f1dfaa") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.187899 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.187949 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.187962 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.187982 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.187993 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.261112 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:54:16.882051799 +0000 UTC
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.290393 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.290452 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.290462 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.290481 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.290494 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.294816 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588"
Jan 21 14:34:34 crc kubenswrapper[4902]: E0121 14:34:34.295010 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.393939 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.393985 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.393998 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.394017 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.394031 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.496913 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.496977 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.496988 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.497006 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.497019 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.599928 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.599984 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.599996 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.600013 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.600027 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.702653 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.702697 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.702706 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.702720 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.702730 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.807008 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.807111 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.807135 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.807165 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.807187 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.910346 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.910423 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.910452 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.910481 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.910508 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.013347 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.013721 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.013789 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.013859 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.013922 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.117956 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.118023 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.118070 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.118097 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.118119 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.221299 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.221357 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.221371 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.221392 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.221406 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.261520 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:05:10.096984402 +0000 UTC
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.294082 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.294105 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:34:35 crc kubenswrapper[4902]: E0121 14:34:35.294206 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.294237 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:34:35 crc kubenswrapper[4902]: E0121 14:34:35.294344 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:34:35 crc kubenswrapper[4902]: E0121 14:34:35.294558 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.324802 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.325169 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.325348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.325505 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.325629 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.428984 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.429094 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.429119 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.429155 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.429179 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.531977 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.532344 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.532450 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.532533 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.532617 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.635887 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.635941 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.635960 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.635983 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.636002 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.739520 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.739583 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.739601 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.739627 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.739656 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.843167 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.843218 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.843234 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.843289 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.843305 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.947204 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.947273 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.947290 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.947319 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.947337 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.050365 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.050701 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.050878 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.051004 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.051220 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.154659 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.155216 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.155411 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.155627 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.155817 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.159419 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588"
Jan 21 14:34:36 crc kubenswrapper[4902]: E0121 14:34:36.159664 4902 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 14:34:36 crc kubenswrapper[4902]: E0121 14:34:36.159774 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs podName:05d94e6a-249a-484c-8895-085e81f1dfaa nodeName:}" failed. No retries permitted until 2026-01-21 14:34:40.159744329 +0000 UTC m=+42.236577398 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs") pod "network-metrics-daemon-kq588" (UID: "05d94e6a-249a-484c-8895-085e81f1dfaa") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.258695 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.259011 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.259096 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.259172 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.259233 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.262090 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:03:20.757551386 +0000 UTC
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.294825 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588"
Jan 21 14:34:36 crc kubenswrapper[4902]: E0121 14:34:36.295080 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.362100 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.362155 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.362169 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.362190 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.362203 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.464459 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.464513 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.464529 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.464551 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.464571 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.567858 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.567916 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.567938 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.567967 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.567988 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.671481 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.671553 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.671578 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.671608 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.671629 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.774839 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.774896 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.774909 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.774934 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.774948 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.878398 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.878495 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.878528 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.878602 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.878635 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.982364 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.982541 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.982620 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.982653 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.982707 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.086863 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.086996 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.087072 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.087169 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.087196 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.190905 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.190952 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.190969 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.190992 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.191010 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.262919 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:21:36.620908961 +0000 UTC
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.294082 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.294156 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.294184 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.294182 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.294204 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.294215 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.294280 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:37 crc kubenswrapper[4902]: E0121 14:34:37.294368 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.294451 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:34:37 crc kubenswrapper[4902]: E0121 14:34:37.294645 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:34:37 crc kubenswrapper[4902]: E0121 14:34:37.294802 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.398121 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.398210 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.398236 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.398276 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.398304 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.501946 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.502005 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.502024 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.502089 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.502126 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.606007 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.606106 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.606125 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.606148 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.606199 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.709702 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.709754 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.709769 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.709790 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.709807 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.812560 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.812622 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.812636 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.812657 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.812670 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.915539 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.915583 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.915593 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.915612 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.915625 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.019009 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.019070 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.019080 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.019097 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.019108 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.121915 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.121967 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.121977 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.121996 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.122006 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.224856 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.224930 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.224946 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.224969 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.224984 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.264016 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:08:36.881962447 +0000 UTC
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.295083 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588"
Jan 21 14:34:38 crc kubenswrapper[4902]: E0121 14:34:38.295246 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.313895 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.328653 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.328716 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.328736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.328765 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.328784 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.332937 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.353586 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.382711 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.404629 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.427457 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.431404 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.431433 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.431445 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.431463 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.431476 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.445580 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.460147 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.476253 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.495881 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.510655 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.533897 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:28Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:34:28.257205 6159 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:34:28.257243 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 14:34:28.257251 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 14:34:28.257264 6159 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 14:34:28.257283 6159 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:34:28.257289 6159 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:34:28.257314 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:34:28.257426 6159 factory.go:656] Stopping watch factory\\\\nI0121 14:34:28.257445 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:28.257342 6159 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 14:34:28.257476 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:28.257483 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:34:28.257493 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:28.257569 6159 ovnkube.go:137] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:30Z\\\",\\\"message\\\":\\\":443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 14:34:29.333853 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:29.333878 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:29.333891 6305 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:34:29.333901 6305 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:29.333966 6305 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z]\\\\nI0121 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.534200 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.534261 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.534275 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.534298 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.534310 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.559693 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.580037 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.598391 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.616028 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.629679 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.636952 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.637069 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.637084 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.637103 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.637119 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.740799 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.740848 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.740863 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.740881 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.740921 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.844271 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.844326 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.844346 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.844370 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.844388 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.947335 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.947405 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.947425 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.947447 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.947461 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.051130 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.051197 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.051230 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.051259 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.051283 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.155037 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.155105 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.155117 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.155135 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.155148 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.258576 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.258630 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.258647 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.258670 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.258687 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.264697 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:57:03.840094141 +0000 UTC
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.294140 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:34:39 crc kubenswrapper[4902]: E0121 14:34:39.294279 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.294609 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:34:39 crc kubenswrapper[4902]: E0121 14:34:39.294732 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.294739 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:34:39 crc kubenswrapper[4902]: E0121 14:34:39.294811 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.360655 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.360697 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.360740 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.360786 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.360802 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.463410 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.463479 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.463496 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.463518 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.463539 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.565988 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.566030 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.566056 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.566072 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.566084 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.669414 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.669454 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.669466 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.669483 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.669495 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.776411 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.776502 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.776524 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.776550 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.776571 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.879620 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.880012 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.880207 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.880342 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.880444 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.982971 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.983011 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.983020 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.983036 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.983077 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.086260 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.086324 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.086341 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.086373 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.086390 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.189160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.189422 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.189512 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.189652 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.189724 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.205986 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:40 crc kubenswrapper[4902]: E0121 14:34:40.206426 4902 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:40 crc kubenswrapper[4902]: E0121 14:34:40.206694 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs podName:05d94e6a-249a-484c-8895-085e81f1dfaa nodeName:}" failed. No retries permitted until 2026-01-21 14:34:48.206659979 +0000 UTC m=+50.283493038 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs") pod "network-metrics-daemon-kq588" (UID: "05d94e6a-249a-484c-8895-085e81f1dfaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.265228 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 08:08:47.12303033 +0000 UTC Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.292469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.292553 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.292574 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.292598 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.292618 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.294741 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:40 crc kubenswrapper[4902]: E0121 14:34:40.294939 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.395381 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.395435 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.395453 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.395480 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.395501 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
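The certificate_manager.go line above reports a rotation deadline well before the certificate's expiry (and, here, already in the past relative to the log clock). One plausible way such a deadline is derived is to pick a jittered point late in the certificate's lifetime; the sketch below assumes a 0.7–0.9 window of the lifetime and a hypothetical issue time, and is not a quote of certificate_manager.go.

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // rotationDeadline picks a random point in roughly the last 10-30% of the
    // certificate's validity window (assumed jitter range).
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := notAfter.Sub(notBefore)
    	frac := 0.7 + 0.2*rand.Float64()
    	return notBefore.Add(time.Duration(float64(total) * frac))
    }

    func main() {
    	notBefore := time.Date(2025, 8, 24, 5, 53, 3, 0, time.UTC) // hypothetical issue time
    	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
    	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    }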
Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.497492 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.497529 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.497542 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.497561 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.497575 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.600609 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.600667 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.600678 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.600701 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.600714 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.703545 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.703588 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.703641 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.703657 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.703668 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.806194 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.806243 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.806253 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.806270 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.806293 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.909530 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.909599 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.909619 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.909646 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.909663 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.012436 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.012467 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.012476 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.012491 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.012500 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.115658 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.115924 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.116037 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.116162 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.116280 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.219509 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.219586 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.219611 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.219643 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.219665 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.266415 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 16:36:02.791811513 +0000 UTC Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.294805 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.294827 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.294846 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.295563 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.295680 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.295321 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.322873 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.323293 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.323416 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.323522 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.323648 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.426659 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.426693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.426702 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.426715 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.426727 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.510371 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.510426 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.510442 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.510465 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.510486 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.532105 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.537154 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.537221 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
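The failed operation above is a strategic-merge patch of the node's .status (conditions, allocatable/capacity, image list), rejected because the node.network-node-identity.openshift.io webhook serves a certificate that expired on 2025-08-24 while the kubelet clock reads 2026-01-21. A sketch for confirming that from the node: connect to the webhook endpoint, skip chain verification, and compare the served leaf certificate's validity window against the current time.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"time"
    )

    func main() {
    	addr := "127.0.0.1:9743" // webhook address from the log
    	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	defer conn.Close()
    	// The first peer certificate is the leaf the webhook presented.
    	leaf := conn.ConnectionState().PeerCertificates[0]
    	now := time.Now()
    	fmt.Println("NotBefore:", leaf.NotBefore, "NotAfter:", leaf.NotAfter)
    	if now.After(leaf.NotAfter) {
    		fmt.Println("certificate has expired:", now, "is after", leaf.NotAfter)
    	}
    }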
event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.537235 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.537256 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.537269 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.554206 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.559742 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.559802 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.559813 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.559835 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.559851 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.576088 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.580860 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.580920 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.580935 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.580958 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.580975 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.602405 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.606803 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.606852 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.606867 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.606913 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.606939 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.625505 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.625658 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.627529 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.627589 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.627609 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.627656 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.627669 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.731373 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.731453 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.731475 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.731499 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.731519 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.834627 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.834663 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.834673 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.834689 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.834701 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.937143 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.937189 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.937200 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.937216 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.937227 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.040844 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.040914 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.040930 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.040953 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.040971 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.144565 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.144632 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.144650 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.144674 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.144694 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.247978 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.248109 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.248123 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.248148 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.248159 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.267447 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 02:15:47.746234243 +0000 UTC Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.295003 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:42 crc kubenswrapper[4902]: E0121 14:34:42.295297 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.296132 4902 scope.go:117] "RemoveContainer" containerID="c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.334939 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:30Z\\\",\\\"message\\\":\\\":443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 14:34:29.333853 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:29.333878 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:29.333891 6305 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:34:29.333901 6305 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:29.333966 6305 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z]\\\\nI0121 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.350989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.351087 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.351117 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.351143 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.351162 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.358533 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.378807 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.398268 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.420581 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.438793 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.454741 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.454786 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.454799 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.454820 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.454835 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.457266 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.474507 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.501105 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.517386 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.528457 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.542009 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.552701 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.557642 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.557702 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.557717 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.557736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.557751 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.566762 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.584974 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.599580 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.612713 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.617030 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/1.log" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.624572 4902 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.624997 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.641473 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.654183 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.660067 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.660106 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.660115 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.660128 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.660138 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.673993 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.692739 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.705669 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.734668 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.755636 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.762341 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.762377 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.762391 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.762407 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.762419 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.771242 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.786318 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.799709 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.811716 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.830280 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1
fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:30Z\\\",\\\"message\\\":\\\":443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 14:34:29.333853 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:29.333878 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:29.333891 6305 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:34:29.333901 6305 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:29.333966 6305 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z]\\\\nI0121 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.852491 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.865524 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.865575 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.865586 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.865602 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.865613 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.867651 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.881723 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.894301 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815
d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.905515 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.968741 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.968799 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.968818 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.968841 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.968857 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.072227 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.072469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.072481 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.072503 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.072516 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.175637 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.175681 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.175693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.175713 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.175726 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.268300 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:37:47.244109875 +0000 UTC Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.278972 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.279026 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.279296 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.279341 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.279357 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.294493 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.294543 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:43 crc kubenswrapper[4902]: E0121 14:34:43.294664 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:43 crc kubenswrapper[4902]: E0121 14:34:43.294826 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.294696 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:43 crc kubenswrapper[4902]: E0121 14:34:43.294978 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.382369 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.382437 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.382456 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.382482 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.382503 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.486189 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.486238 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.486293 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.486313 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.486325 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.589303 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.589339 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.589352 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.589368 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.589380 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.631362 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/2.log" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.632799 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/1.log" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.636783 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57" exitCode=1 Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.636842 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.636891 4902 scope.go:117] "RemoveContainer" containerID="c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.637961 4902 scope.go:117] "RemoveContainer" containerID="c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57" Jan 21 14:34:43 crc kubenswrapper[4902]: E0121 14:34:43.639385 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.654730 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.667646 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.680195 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.692671 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.692970 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.693078 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.693160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.693245 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.695763 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.716294 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.728449 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.754227 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.771781 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.788682 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.795827 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.795904 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.795929 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.795964 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.795988 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.805905 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.822542 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.849692 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.867838 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5b
c5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.885740 4902 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c
1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.898962 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.899028 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.899072 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.899096 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.899117 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.906357 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.921758 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.949144 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1
fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:30Z\\\",\\\"message\\\":\\\":443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 14:34:29.333853 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:29.333878 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:29.333891 6305 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:34:29.333901 6305 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:29.333966 6305 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z]\\\\nI0121 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: 
*v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.002930 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.002987 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.003000 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.003022 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.003039 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.106678 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.106749 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.106770 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.106798 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.106870 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.210426 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.210493 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.210504 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.210520 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.210536 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.269370 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:10:43.611320112 +0000 UTC Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.294204 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:44 crc kubenswrapper[4902]: E0121 14:34:44.294486 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.313902 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.313986 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.314009 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.314088 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.314116 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.417372 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.417445 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.417466 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.417496 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.417516 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.520929 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.521019 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.521105 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.521155 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.521178 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.625000 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.625102 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.625121 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.625149 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.625171 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.643424 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/2.log" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.648575 4902 scope.go:117] "RemoveContainer" containerID="c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57" Jan 21 14:34:44 crc kubenswrapper[4902]: E0121 14:34:44.648763 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.665424 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.682324 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.705464 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.723194 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.728436 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.728519 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.728551 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.728583 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.728608 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.743962 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.763449 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.782963 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.796603 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.808181 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.823995 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.831365 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.831415 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc 
kubenswrapper[4902]: I0121 14:34:44.831431 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.831455 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.831470 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.838015 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.861776 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.882837 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5b
c5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.907722 4902 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c
1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.927341 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.934003 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.934085 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.934111 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.934139 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.934162 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.940854 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.972923 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1
fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.037729 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.037785 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.037796 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.037819 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.037835 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.140652 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.140732 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.140751 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.140780 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.140801 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.243579 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.243659 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.243677 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.243706 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.243724 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.270212 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:26:55.123853126 +0000 UTC
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.294891 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.295084 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:34:45 crc kubenswrapper[4902]: E0121 14:34:45.295147 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:34:45 crc kubenswrapper[4902]: E0121 14:34:45.295373 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.295569 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:34:45 crc kubenswrapper[4902]: E0121 14:34:45.295714 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.347499 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.347569 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.347592 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.347625 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.347649 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.451574 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.451651 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.451669 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.451693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.451712 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.555261 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.555308 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.555318 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.555332 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.555342 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.658253 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.658295 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.658304 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.658320 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.658330 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.761727 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.761795 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.761807 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.761826 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.761839 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.865661 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.865738 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.865760 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.865787 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.865805 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.968930 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.969014 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.969090 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.969164 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.969189 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.072590 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.072667 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.072694 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.072730 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.072758 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.175391 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.175455 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.175477 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.175507 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.175532 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.271227 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 17:51:12.351134791 +0000 UTC
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.279220 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.279274 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.279299 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.279330 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.279354 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.309762 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588"
Jan 21 14:34:46 crc kubenswrapper[4902]: E0121 14:34:46.310099 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.382304 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.382342 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.382353 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.382369 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.382381 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.485709 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.485781 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.485800 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.485823 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.485847 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.590221 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.590262 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.590270 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.590285 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.590294 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.667258 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.683885 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.687350 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.693710 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.693760 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.693779 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.693805 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.693824 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.704824 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.724991 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.744583 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.762804 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.779794 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.796487 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.796554 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.796574 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.796600 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.796618 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.808492 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d791
18ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.844119 4902 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:
33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.863987 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.882777 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.899461 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.899518 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.899537 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.899563 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.899581 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.905720 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.924455 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.954670 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1
fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.971665 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.987771 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.002768 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.003229 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.003470 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.003680 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.003890 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.010767 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:47Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.029581 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:47Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.107419 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.107475 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.107491 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.107514 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.107536 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.210977 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.211084 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.211111 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.211142 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.211165 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.271718 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:23:27.474577242 +0000 UTC Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.294544 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.294609 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.294636 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:47 crc kubenswrapper[4902]: E0121 14:34:47.294729 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:47 crc kubenswrapper[4902]: E0121 14:34:47.294897 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:47 crc kubenswrapper[4902]: E0121 14:34:47.295116 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.315078 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.315144 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.315170 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.315200 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.315223 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.422399 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.422468 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.422504 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.422525 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.422540 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.524521 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.524591 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.524617 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.524686 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.524716 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.628444 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.628504 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.628521 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.628545 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.628563 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.731900 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.731978 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.731995 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.732020 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.732038 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.835910 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.835971 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.835990 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.836014 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.836032 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.939258 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.939304 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.939317 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.939333 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.939345 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.042030 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.042136 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.042159 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.042190 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.042212 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.146134 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.146192 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.146211 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.146234 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.146252 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.249120 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.249190 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.249205 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.249224 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.249237 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.272464 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 03:53:03.438077509 +0000 UTC Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.293953 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:48 crc kubenswrapper[4902]: E0121 14:34:48.294167 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.299172 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:48 crc kubenswrapper[4902]: E0121 14:34:48.299380 4902 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:48 crc kubenswrapper[4902]: E0121 14:34:48.299453 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs podName:05d94e6a-249a-484c-8895-085e81f1dfaa nodeName:}" failed. No retries permitted until 2026-01-21 14:35:04.299431366 +0000 UTC m=+66.376264425 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs") pod "network-metrics-daemon-kq588" (UID: "05d94e6a-249a-484c-8895-085e81f1dfaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.316942 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.337225 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.351462 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.351536 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.351553 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.351575 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.351589 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.372093 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.400228 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48
f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.423665 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.436426 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.454075 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.454121 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.454137 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.454156 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.454171 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.459138 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.474423 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.490189 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.503036 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.547868 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.556468 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.556508 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.556523 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.556545 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.556563 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.570752 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.582469 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.602283 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.613571 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.626116 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.637893 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.650400 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.658926 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.659105 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.659205 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.659296 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.659382 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.762251 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.762625 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.762851 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.763139 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.763354 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.866580 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.866887 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.866987 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.867139 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.867296 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.969817 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.969914 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.969937 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.969970 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.969993 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.073205 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.073265 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.073282 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.073306 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.073323 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.108312 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.108518 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:35:21.10848391 +0000 UTC m=+83.185316979 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.176830 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.176883 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.176900 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.176926 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.176940 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.176940 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.209805 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.209896 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.209947 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.209984 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210097 4902 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210204 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210266 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210285 4902 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210301 4902 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210401 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210462 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210479 4902 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210418 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:35:21.210398807 +0000 UTC m=+83.287231876 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210575 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:35:21.210553751 +0000 UTC m=+83.287386970 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210873 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:35:21.21086024 +0000 UTC m=+83.287693269 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.272578 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 18:19:21.021485001 +0000 UTC Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.279303 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.279335 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.279346 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.279362 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.279373 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.293926 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.293982 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.294177 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.294332 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.294467 4902 util.go:30] "No sandbox for pod can be found. 
Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.294467 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.295033 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.381879 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.381948 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.381961 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.381983 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.381997 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.484871 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.485228 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.485363 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.485497 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.485667 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.588732 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.588772 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.588780 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.588796 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.588807 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.691335 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.691383 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.691394 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.691414 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.691428 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.794170 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.794222 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.794234 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.794253 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.794267 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.897220 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.897292 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.897312 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.897336 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.897375 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.000886 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.001020 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.001110 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.001149 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.001173 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.103392 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.103436 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.103445 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.103461 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.103471 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.206590 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.207233 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.207296 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.207339 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.207369 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.274039 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 04:08:33.218483887 +0000 UTC Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.294801 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:50 crc kubenswrapper[4902]: E0121 14:34:50.295026 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.310658 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.311106 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.311275 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.311424 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.311576 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.415549 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.415609 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.415626 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.415651 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.415672 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.518492 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.518531 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.518543 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.518560 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.518574 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.620905 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.621026 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.621089 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.621128 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.621150 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.724437 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.724512 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.724535 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.724564 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.724585 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.827298 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.827348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.827370 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.827399 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.827420 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.930404 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.930469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.930492 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.930523 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.930546 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.033014 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.033091 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.033102 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.033118 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.033131 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.136439 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.136520 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.136548 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.136580 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.136608 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.239506 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.239555 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.239564 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.239577 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.239586 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.274434 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:32:35.978644298 +0000 UTC Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.294905 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.294966 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.294917 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.295155 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.295248 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.295311 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.343134 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.343226 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.343250 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.343280 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.343304 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.446285 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.446317 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.446325 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.446339 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.446350 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.549103 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.549168 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.549185 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.549208 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.549225 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.652479 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.652551 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.652576 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.652606 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.652634 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.716081 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.716406 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.716481 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.716549 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.716615 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.730402 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:51Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.734836 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.734887 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.734905 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.734929 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.734946 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.750174 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:51Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.753873 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.753914 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.753923 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.753941 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.753950 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.769958 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:51Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.777335 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.777397 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.777415 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.777437 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.777454 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.792448 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:51Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.795874 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.795914 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.795924 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.795940 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.795951 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.813790 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:51Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.813970 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.815814 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.815843 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.815853 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.815870 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.815881 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.918244 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.918270 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.918278 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.918290 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.918299 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.021654 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.021729 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.021744 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.021785 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.021800 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.124955 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.125031 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.125093 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.125131 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.125156 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.227885 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.227942 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.227956 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.227978 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.227991 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.275328 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 22:15:56.212537493 +0000 UTC Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.294768 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:52 crc kubenswrapper[4902]: E0121 14:34:52.294947 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.330435 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.330488 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.330500 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.330516 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.330529 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.434013 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.434165 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.434196 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.434227 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.434251 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.538075 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.538124 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.538134 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.538151 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.538163 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.641321 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.641355 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.641363 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.641376 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.641387 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.743845 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.743891 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.743902 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.743921 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.743936 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.847210 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.847542 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.847643 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.847743 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.847837 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.950112 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.950349 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.950437 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.950517 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.950587 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.054124 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.054182 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.054190 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.054213 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.054222 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.183105 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.183159 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.183173 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.183197 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.183214 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.276022 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 01:13:58.22597301 +0000 UTC Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.285460 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.285494 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.285507 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.285527 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.285536 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.294842 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:53 crc kubenswrapper[4902]: E0121 14:34:53.295035 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.295078 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.295164 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:53 crc kubenswrapper[4902]: E0121 14:34:53.295259 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:53 crc kubenswrapper[4902]: E0121 14:34:53.295396 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.388527 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.388564 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.388572 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.388585 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.388596 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.492425 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.492468 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.492483 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.492507 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.492522 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.595525 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.595578 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.595593 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.595614 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.595630 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.698815 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.698871 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.698887 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.698911 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.698928 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.802435 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.802475 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.802484 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.802504 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.802513 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.905172 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.905208 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.905217 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.905229 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.905238 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.008963 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.009014 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.009030 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.009067 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.009083 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.112315 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.112399 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.112436 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.112469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.112492 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.215470 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.215520 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.215530 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.215547 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.215561 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.276414 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 02:14:10.761399868 +0000 UTC Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.294250 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:54 crc kubenswrapper[4902]: E0121 14:34:54.294401 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.317926 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.317964 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.317973 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.317989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.318000 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.421466 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.421541 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.421561 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.421586 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.421603 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.524686 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.524727 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.524741 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.524763 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.524779 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.627757 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.627796 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.627806 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.627823 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.627834 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.730110 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.730149 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.730164 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.730183 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.730196 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.833441 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.833487 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.833497 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.833512 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.833525 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.936490 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.936553 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.936571 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.936597 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.936614 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.039259 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.039348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.039362 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.039383 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.039396 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.141880 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.141926 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.141938 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.141955 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.141968 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.244545 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.244588 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.244598 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.244612 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.244622 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.276997 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:17:22.873603895 +0000 UTC Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.294605 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.294632 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.294658 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:55 crc kubenswrapper[4902]: E0121 14:34:55.294769 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:55 crc kubenswrapper[4902]: E0121 14:34:55.294820 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:55 crc kubenswrapper[4902]: E0121 14:34:55.294880 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.346729 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.346757 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.346765 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.346777 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.346786 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.449641 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.450003 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.450238 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.450456 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.450634 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.553626 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.553676 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.553690 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.553710 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.553726 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.657255 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.657323 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.657337 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.657386 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.657407 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.760645 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.760723 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.760747 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.760778 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.760801 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.864325 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.864499 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.864550 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.864586 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.864618 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.968448 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.968537 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.968566 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.968596 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.968615 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.071966 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.072071 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.072091 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.072114 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.072139 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.176007 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.176093 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.176112 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.176134 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.176152 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.277278 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:37:06.060387284 +0000 UTC Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.279193 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.279279 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.279290 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.279308 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.279319 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.294602 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:56 crc kubenswrapper[4902]: E0121 14:34:56.294870 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.382160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.382215 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.382227 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.382249 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.382263 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.485656 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.485723 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.485764 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.485801 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.485825 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.589347 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.589411 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.589435 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.589466 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.589486 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.693521 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.693588 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.693609 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.693637 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.693657 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.796866 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.796930 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.796949 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.796980 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.797000 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.899715 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.899764 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.899783 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.899811 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.899829 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.001907 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.001949 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.001983 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.002004 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.002017 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.104691 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.104720 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.104730 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.104771 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.104786 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.207946 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.208021 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.208036 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.208080 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.208096 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.278027 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 23:46:58.23686725 +0000 UTC Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.294620 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.294671 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:57 crc kubenswrapper[4902]: E0121 14:34:57.294752 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:57 crc kubenswrapper[4902]: E0121 14:34:57.294827 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.294871 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:57 crc kubenswrapper[4902]: E0121 14:34:57.295228 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.310872 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.310915 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.310927 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.310969 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.310981 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.413834 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.413897 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.413909 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.413923 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.413934 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.516324 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.516362 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.516373 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.516388 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.516397 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.619638 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.619756 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.619778 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.619808 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.619828 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.722398 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.722450 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.722462 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.722483 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.722496 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.825315 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.825353 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.825366 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.825382 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.825394 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.928768 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.928807 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.928819 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.928837 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.928861 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.031823 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.031862 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.031873 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.031888 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.031902 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.134573 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.134611 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.134622 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.134640 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.134651 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.238087 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.238131 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.238143 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.238181 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.238192 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.278200 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:37:22.473047808 +0000 UTC Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.293899 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:58 crc kubenswrapper[4902]: E0121 14:34:58.294136 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.318653 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath
\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.334336 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o:
//baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.340423 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.340489 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.340504 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.340525 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.340562 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.350301 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.365180 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.380902 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.395473 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.410237 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.423242 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.438698 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.443549 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.443597 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.443616 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.443637 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.443651 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.457887 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d791
18ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.480333 4902 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:
33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.493663 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.511013 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.528392 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.541867 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.546555 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.546607 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.546618 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.546636 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.546648 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.559822 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 
14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.574454 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.585297 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.648724 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.648779 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.648794 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.648816 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.648832 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.751679 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.751732 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.751742 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.751757 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.751770 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.854700 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.854776 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.854799 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.854831 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.854852 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.958392 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.958469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.958491 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.958518 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.958538 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.060942 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.060988 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.060998 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.061015 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.061029 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.164917 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.165011 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.165028 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.165077 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.165096 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.268670 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.268719 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.268734 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.268753 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.268765 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.279189 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 15:07:08.570709555 +0000 UTC Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.294556 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.294585 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.294686 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:59 crc kubenswrapper[4902]: E0121 14:34:59.295286 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:59 crc kubenswrapper[4902]: E0121 14:34:59.295417 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:59 crc kubenswrapper[4902]: E0121 14:34:59.295481 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.295890 4902 scope.go:117] "RemoveContainer" containerID="c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57" Jan 21 14:34:59 crc kubenswrapper[4902]: E0121 14:34:59.296231 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.371822 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.371890 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.371909 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.371939 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.371958 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.475500 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.475536 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.475546 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.475562 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.475574 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.578508 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.578588 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.578646 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.578672 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.578691 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.682627 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.682667 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.682682 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.682697 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.682706 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.786720 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.786789 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.786812 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.786890 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.786920 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.890136 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.890212 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.890237 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.890272 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.890297 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.993624 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.993678 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.993691 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.993710 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.993724 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.096302 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.096350 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.096362 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.096382 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.096404 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.199349 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.199395 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.199407 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.199428 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.199447 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.280395 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 06:58:32.625458145 +0000 UTC Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.294803 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:00 crc kubenswrapper[4902]: E0121 14:35:00.294929 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.302415 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.302451 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.302462 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.302477 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.302491 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.405875 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.405919 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.405938 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.405972 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.405989 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.509234 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.509283 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.509301 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.509325 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.509345 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.612080 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.612115 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.612125 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.612140 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.612152 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.716375 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.716437 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.716450 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.716485 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.716503 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.819006 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.819075 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.819086 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.819100 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.819111 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.922145 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.922201 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.922211 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.922229 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.922241 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.024795 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.025160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.025256 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.025342 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.025415 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.127915 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.128265 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.128358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.128463 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.128552 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.231450 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.231500 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.231509 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.231524 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.231538 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.280503 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 19:02:52.284890511 +0000 UTC Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.294884 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.294931 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:01 crc kubenswrapper[4902]: E0121 14:35:01.295146 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.295204 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:01 crc kubenswrapper[4902]: E0121 14:35:01.295297 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:01 crc kubenswrapper[4902]: E0121 14:35:01.295413 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.335508 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.335818 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.335921 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.336242 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.336355 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.439119 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.439615 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.439848 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.440018 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.440294 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.543744 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.543786 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.543796 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.543814 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.543825 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.646498 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.646549 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.646564 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.646587 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.646603 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.748856 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.749196 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.749389 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.749570 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.749720 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.852551 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.852972 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.853294 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.853578 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.853840 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.956596 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.956837 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.956934 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.957002 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.957107 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.060650 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.060696 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.060706 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.060723 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.060737 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.164126 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.164180 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.164192 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.164229 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.164242 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.201444 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.201491 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.201503 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.201520 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.201532 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: E0121 14:35:02.213428 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.217433 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.217463 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.217475 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.217493 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.217506 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: E0121 14:35:02.228780 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.231964 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.231990 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.232000 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.232016 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.232026 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: E0121 14:35:02.242584 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.245917 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.245951 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.245962 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.245989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.246003 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: E0121 14:35:02.257724 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.261127 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.261156 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.261164 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.261177 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.261187 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: E0121 14:35:02.271965 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:02 crc kubenswrapper[4902]: E0121 14:35:02.272158 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.273897 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
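Every attempt in the retry run above fails for the same root cause the API server reports back: the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 is serving a TLS certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-21, and the kubelet gives up after its fixed retry budget (five attempts here). One way to confirm from the node exactly what the x509 error claims is to dial the webhook endpoint and read the peer certificate's validity window. The Go sketch below assumes only the address taken from the log line; it is not part of the kubelet or of this log's tooling.

// certcheck.go, a minimal sketch: dial a TLS endpoint and print the peer
// certificate's validity window, to confirm an "x509: certificate has
// expired" failure like the one logged above. The address is copied from
// the log line and is otherwise an assumption.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook endpoint from the log
	// InsecureSkipVerify lets us retrieve the certificate even though it
	// is expired; we only want to inspect NotBefore/NotAfter.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial %s: %v", addr, err)
	}
	defer conn.Close()

	state := conn.ConnectionState()
	if len(state.PeerCertificates) == 0 {
		log.Fatal("no peer certificate presented")
	}
	cert := state.PeerCertificates[0]
	now := time.Now()
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v (now %s)\n", now.After(cert.NotAfter), now.Format(time.RFC3339))
}

For the state captured above this would print a notAfter of 2025-08-24T17:21:41Z; the node status patches stop failing once the webhook serves a certificate that is valid at the node's current time.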
event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.273927 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.273940 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.273954 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.273966 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.281274 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:38:27.285973398 +0000 UTC Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.294626 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:02 crc kubenswrapper[4902]: E0121 14:35:02.294746 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.377123 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.377168 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.377185 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.377206 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.377223 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.479441 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.479475 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.479484 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.479497 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.479506 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.581767 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.581807 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.581818 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.581836 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.581849 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.684722 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.684789 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.684811 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.684841 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.684862 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.787985 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.788492 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.788630 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.788768 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.788887 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.892238 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.892278 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.892287 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.892301 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.892311 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.103153 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.103190 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.103201 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.103217 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.103229 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.206546 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.206590 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.206603 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.206619 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.206631 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.281434 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:58:12.272896311 +0000 UTC
Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.293983 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.294079 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:35:03 crc kubenswrapper[4902]: E0121 14:35:03.294202 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
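The pod_workers.go entries above repeat one underlying failure, the missing CNI configuration, across several pods. A short sketch for tallying which pods are stuck on it, assuming the log has been unpacked to a plain-text file; the kubelet.log path is hypothetical:

```python
import re
from collections import Counter

# Count "Error syncing pod, skipping" entries per pod. findall copes with
# extraction-flattened lines that hold several log entries each.
pattern = re.compile(r'"Error syncing pod, skipping".*?pod="([^"]+)"')

counts = Counter()
with open("kubelet.log") as log:
    for line in log:
        for pod in pattern.findall(line):
            counts[pod] += 1

for pod, n in counts.most_common():
    print(f"{n:4d}  {pod}")
# Pods appearing in the entries above include
# openshift-multus/network-metrics-daemon-kq588 and the
# openshift-network-diagnostics check pods.
```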
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.294238 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:03 crc kubenswrapper[4902]: E0121 14:35:03.294359 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:03 crc kubenswrapper[4902]: E0121 14:35:03.294461 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.308738 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.308771 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.308780 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.308795 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.308805 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.410545 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.410617 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.410630 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.410648 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.410663 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.514010 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.514067 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.514079 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.514095 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.514105 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.616283 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.616316 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.616329 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.616345 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.616357 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.719163 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.719207 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.719216 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.719229 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.719239 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.822061 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.822114 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.822125 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.822139 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.822151 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.924969 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.925037 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.925091 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.925116 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.925133 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.027479 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.027512 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.027520 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.027533 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.027543 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.130614 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.130668 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.130681 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.130705 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.130721 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.232990 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.233020 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.233032 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.233065 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.233078 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.282084 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:28:07.831308217 +0000 UTC Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.294629 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:04 crc kubenswrapper[4902]: E0121 14:35:04.294774 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.335179 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.335219 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.335231 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.335249 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.335266 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.400567 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:04 crc kubenswrapper[4902]: E0121 14:35:04.400741 4902 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:35:04 crc kubenswrapper[4902]: E0121 14:35:04.400809 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs podName:05d94e6a-249a-484c-8895-085e81f1dfaa nodeName:}" failed. No retries permitted until 2026-01-21 14:35:36.400791737 +0000 UTC m=+98.477624776 (durationBeforeRetry 32s). 
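The nestedpendingoperations.go entry above shows the failed metrics-certs mount being rescheduled 32s out (next attempt at 14:35:36, m=+98.48). The kubelet retries failing volume operations with an exponential backoff, and the sketch below reproduces such a doubling schedule; the 500ms starting delay and the roughly two-minute cap are assumed defaults, not values read from this log:

```python
from datetime import timedelta

# Sketch of an exponential backoff like the one implied by
# "durationBeforeRetry 32s": each failed MountVolume.SetUp doubles the
# wait until a cap is reached. Initial delay and cap are assumptions.
delay = timedelta(milliseconds=500)
cap = timedelta(minutes=2, seconds=2)
for attempt in range(1, 10):
    print(f"attempt {attempt}: wait {delay.total_seconds():g}s before retry")
    delay = min(delay * 2, cap)
# Under these assumptions the log above sits at the 32s step
# (0.5s * 2**6); the next retry is scheduled for 14:35:36, i.e. 32s
# after the 14:35:04 failure.
```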
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.438002 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.438084 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.438100 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.438120 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.438132 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.540591 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.540633 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.540643 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.540657 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.540667 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.642561 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.642594 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.642602 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.642616 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.642625 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.745286 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.745356 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.745371 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.745393 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.745412 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.847949 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.848031 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.848081 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.848112 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.848131 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.950895 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.950940 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.950948 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.950967 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.950979 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.053876 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.053922 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.053932 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.053950 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.053961 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.157523 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.157565 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.157573 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.157590 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.157600 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.260277 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.260310 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.260322 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.260338 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.260350 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.282633 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 07:32:50.892809826 +0000 UTC
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.293981 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.294029 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.294084 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:35:05 crc kubenswrapper[4902]: E0121 14:35:05.294111 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:35:05 crc kubenswrapper[4902]: E0121 14:35:05.294188 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:35:05 crc kubenswrapper[4902]: E0121 14:35:05.294245 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.362691 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.362756 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.362767 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.362783 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.362794 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.466111 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.466150 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.466159 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.466175 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.466185 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.568928 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.568990 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.568999 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.569015 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.569024 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.671786 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.671841 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.671851 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.671869 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.671883 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.728967 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/0.log" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.729032 4902 generic.go:334] "Generic (PLEG): container finished" podID="037b55cf-cb9e-41ce-8b1e-3898f490a4aa" containerID="801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e" exitCode=1 Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.729114 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mztd6" event={"ID":"037b55cf-cb9e-41ce-8b1e-3898f490a4aa","Type":"ContainerDied","Data":"801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.729612 4902 scope.go:117] "RemoveContainer" containerID="801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.743567 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.757273 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.772477 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.774630 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.774678 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:05 crc 
kubenswrapper[4902]: I0121 14:35:05.774687 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.774705 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.774716 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.785659 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.801474 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.814893 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.830540 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.844589 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.862759 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.877023 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.877315 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.877399 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.877462 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.877522 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.881923 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.902058 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48
f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.919912 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.933262 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.946086 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.958055 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.969644 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.979894 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.980182 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.980312 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:05 crc 
kubenswrapper[4902]: I0121 14:35:05.980423 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.980515 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.982065 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.992265 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 
14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.083538 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.083575 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.083584 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.083598 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.083608 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.187126 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.187177 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.187188 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.187207 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.187223 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.282810 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:24:42.347774877 +0000 UTC Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.290791 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.291138 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.291266 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.291355 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.291449 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.294238 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:06 crc kubenswrapper[4902]: E0121 14:35:06.294477 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.394808 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.395111 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.395210 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.395378 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.395468 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.497739 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.497995 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.498106 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.498134 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.498143 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.600490 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.600541 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.600552 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.600568 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.600580 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.703220 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.703259 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.703270 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.703287 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.703299 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.734183 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/0.log" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.734231 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mztd6" event={"ID":"037b55cf-cb9e-41ce-8b1e-3898f490a4aa","Type":"ContainerStarted","Data":"1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.754351 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.774949 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 
14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.789705 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.801831 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.805677 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.805704 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.805712 4902 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.805725 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.805733 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.815347 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.829150 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.842107 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.859371 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.872694 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.884415 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.903525 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.907916 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.907963 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.907978 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.907995 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.908011 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.918874 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.931417 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.944509 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.956930 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.974753 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.984741 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.995147 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.010843 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.010910 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.010927 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.010951 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.010968 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.113262 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.113311 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.113320 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.113337 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.113349 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.216480 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.216548 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.216558 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.216582 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.216600 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.283555 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 00:25:26.370509254 +0000 UTC
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.294087 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:35:07 crc kubenswrapper[4902]: E0121 14:35:07.294203 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.294367 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:35:07 crc kubenswrapper[4902]: E0121 14:35:07.294416 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.294567 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:35:07 crc kubenswrapper[4902]: E0121 14:35:07.294748 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.319171 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.319198 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.319207 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.319223 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.319232 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.422136 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.422180 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.422191 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.422207 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.422217 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.525431 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.525470 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.525481 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.525497 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.525511 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.628510 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.628547 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.628559 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.628576 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.628587 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.731358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.731397 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.731409 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.731424 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.731434 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.834014 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.834087 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.834097 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.834112 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.834122 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.936914 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.936959 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.936972 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.936989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.937000 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.039245 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.039305 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.039317 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.039333 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.039344 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.141782 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.141823 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.141833 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.141848 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.141859 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.243863 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.243907 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.243940 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.243958 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.243968 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.284578 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 02:07:32.757132786 +0000 UTC Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.293896 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:08 crc kubenswrapper[4902]: E0121 14:35:08.294014 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.317816 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.330063 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.343150 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.346062 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.346106 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.346122 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.346142 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.346158 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.358887 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.374738 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.398348 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1
fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.412154 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.423428 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.435547 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.446088 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 
14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.448918 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.449030 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.449110 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.449204 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.449296 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.458002 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.470103 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.480446 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.490556 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.501783 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.511382 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.525267 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.534134 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.552444 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.552482 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.552492 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.552507 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.552518 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.654281 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.654321 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.654330 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.654344 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.654354 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.756869 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.756911 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.756923 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.756938 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.756949 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.859373 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.859414 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.859424 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.859438 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.859447 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.961664 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.961695 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.961703 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.961715 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.961724 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.064848 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.064902 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.064920 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.064945 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.064962 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:09Z","lastTransitionTime":"2026-01-21T14:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.177032 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.177100 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.177112 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.177150 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.177163 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:09Z","lastTransitionTime":"2026-01-21T14:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.279277 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.279300 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.279309 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.279324 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.279332 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:09Z","lastTransitionTime":"2026-01-21T14:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.285089 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 08:45:38.032207687 +0000 UTC Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.294236 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.294258 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:09 crc kubenswrapper[4902]: E0121 14:35:09.294354 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.294376 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:09 crc kubenswrapper[4902]: E0121 14:35:09.294482 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:09 crc kubenswrapper[4902]: E0121 14:35:09.294566 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.386275 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.386312 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.386321 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.386336 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.386347 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:09Z","lastTransitionTime":"2026-01-21T14:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... 9 further identical "Recording event message for node" / "Node became not ready" record groups, 14:35:09.386275 through 14:35:10.208655, omitted ...]
Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.285498 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:14:58.234846109 +0000 UTC
Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.294940 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588"
Jan 21 14:35:10 crc kubenswrapper[4902]: E0121 14:35:10.295122 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa"
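Note: the certificate_manager.go:356 records report a different rotation deadline on every pass (2025-12-01, 2025-12-18, 2026-01-08, 2025-11-30) against the same 2026-02-24 expiration. That is consistent with client-go re-drawing a jittered deadline at roughly 70-90% of the certificate's validity window each time it evaluates rotation. A sketch under that assumption; NotBefore, the one-year lifetime, and the exact jitter constants are assumptions, not values from the log.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline mimics (assumption, not the verbatim client-go code)
// how rotation picks a deadline: a random point at roughly 70-90% of the
// certificate's validity window, recomputed per evaluation, which is why
// the logged deadline changes on every retry.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiration taken from the log; the issue time is an assumed 1-year lifetime.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.AddDate(0, -12, 0)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
	}
}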
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.310897 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.310935 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.310944 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.310962 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.310974 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.414012 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.414076 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.414090 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.414113 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.414125 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.517119 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.517160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.517169 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.517184 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.517197 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.619156 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.619191 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.619202 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.619219 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.619231 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.721877 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.721961 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.721991 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.722016 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.722085 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.825231 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.825270 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.825281 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.825296 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.825311 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.929251 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.929319 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.929341 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.929370 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.929395 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.033569 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.033633 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.033641 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.033659 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.033669 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:11Z","lastTransitionTime":"2026-01-21T14:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.136555 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.136585 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.136593 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.136607 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.136616 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:11Z","lastTransitionTime":"2026-01-21T14:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.240344 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.240401 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.240417 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.240440 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.240464 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:11Z","lastTransitionTime":"2026-01-21T14:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.286160 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:15:55.272225583 +0000 UTC Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.294696 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.294737 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.294799 4902 util.go:30] "No sandbox for pod can be found. 
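Note: each setters.go:603 record embeds the new Ready condition as inline JSON. Decoding it with a small struct makes the reason and message machine-checkable; the struct below is a hand-rolled subset mirroring the payload's field names (in Kubernetes proper this is corev1.NodeCondition), and the payload is copied from one of the records above.

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the fields of the condition={...} payload in the
// setters.go records (a subset of Kubernetes' corev1.NodeCondition).
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied from one of the log records above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:11Z","lastTransitionTime":"2026-01-21T14:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("node Ready=%s reason=%s\n", c.Status, c.Reason)
}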
Jan 21 14:35:11 crc kubenswrapper[4902]: E0121 14:35:11.294912 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:35:11 crc kubenswrapper[4902]: E0121 14:35:11.295064 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:35:11 crc kubenswrapper[4902]: E0121 14:35:11.295158 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... 7 further identical "Recording event message for node" / "Node became not ready" record groups, 14:35:11.342796 through 14:35:11.960975, omitted ...]
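Note: the pod-status patch at the top of this excerpt and the node-status patches recorded below all fail the same way: the network-node-identity webhook at https://127.0.0.1:9743 serves a certificate that expired 2025-08-24T17:21:41Z. A Go sketch to confirm this from the node; the endpoint is taken from the log's failed Post URL, and InsecureSkipVerify is used only so the handshake completes far enough to read the expired certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Webhook endpoint taken from the log's failed Post URL.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // skip verification so the expired cert can be inspected
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("serving cert valid %s - %s\n", cert.NotBefore, cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		// Matches the kubelet error: certificate has expired or is not yet valid.
		fmt.Println("certificate has expired")
	}
}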
[... 3 further identical "Recording event message for node" / "Node became not ready" record groups, 14:35:12.063716 through 14:35:12.269364, omitted ...]
Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.287231 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:08:52.571124904 +0000 UTC
Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.294622 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588"
Jan 21 14:35:12 crc kubenswrapper[4902]: E0121 14:35:12.294771 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa"
Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.298874 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.298923 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.298935 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.298951 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.298964 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
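Note: the "Error updating node status, will retry" records that follow carry the rejected patch JSON doubly escaped by the journal (\\\" sequences). One way to read such a payload is to undo the string quoting with strconv.Unquote and re-indent it; the payload in this sketch is a shortened, illustrative stand-in for the real one, not the full patch from the log.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// escaped stands in for the payload between err="failed to patch status \"...\""
	// in the records below, shortened here for illustration.
	escaped := `"{\"status\":{\"conditions\":[{\"type\":\"Ready\",\"status\":\"False\"}]}}"`
	unquoted, err := strconv.Unquote(escaped) // undo the log's string escaping
	if err != nil {
		panic(err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(unquoted), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(pretty.String())
}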
Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: E0121 14:35:12.311988 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:12Z is after 
2025-08-24T17:21:41Z" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.316519 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.316596 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.316618 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.316648 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.316668 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: E0121 14:35:12.335968 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:12Z is after 
2025-08-24T17:21:41Z" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.343162 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.343206 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.343214 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.343229 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.343241 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: E0121 14:35:12.360792 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:12Z is after 
2025-08-24T17:21:41Z" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.365474 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.365555 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.365572 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.365593 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.365631 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: E0121 14:35:12.384128 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:12Z is after 
2025-08-24T17:21:41Z" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.389400 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.389464 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.389476 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.389516 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.389529 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: E0121 14:35:12.404193 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:12Z is after 
2025-08-24T17:21:41Z" Jan 21 14:35:12 crc kubenswrapper[4902]: E0121 14:35:12.404317 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.406735 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.406764 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.406772 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.406787 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.406799 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.509549 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.509604 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.509618 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.509639 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.509653 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.612932 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.612985 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.612996 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.613011 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.613021 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.715468 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.715504 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.715515 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.715535 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.715548 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.818720 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.818782 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.818809 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.818832 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.818851 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.922883 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.922953 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.922974 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.922996 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.923016 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.027290 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.027358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.027382 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.027406 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.027425 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.129739 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.129814 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.129843 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.129873 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.129891 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.232986 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.233084 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.233100 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.233121 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.233136 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.287834 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:18:36.258076676 +0000 UTC Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.294170 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.294234 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.294225 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:13 crc kubenswrapper[4902]: E0121 14:35:13.294845 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:13 crc kubenswrapper[4902]: E0121 14:35:13.295089 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:13 crc kubenswrapper[4902]: E0121 14:35:13.295262 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.295350 4902 scope.go:117] "RemoveContainer" containerID="c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.336475 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.336529 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.336544 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.336565 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.336582 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.441263 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.441369 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.441385 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.441427 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.441440 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.545622 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.545679 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.545701 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.545722 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.545735 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.648440 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.648533 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.648542 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.648555 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.648564 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.751310 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.751354 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.751366 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.751381 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.751393 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.759377 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/2.log" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.762703 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.763261 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.784132 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.800397 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.812269 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.827792 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.839987 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.853946 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.853999 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.854010 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.854027 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.854075 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.857224 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 
14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initCont
ainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.869529 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.882724 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.899436 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.912474 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 
14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.932140 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.947878 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.956996 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.957096 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.957112 4902 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.957141 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.957160 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.961489 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.984982 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.000887 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.020413 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.032932 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.043359 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.060013 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.060066 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.060081 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.060098 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.060111 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.163430 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.163474 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.163482 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.163497 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.163507 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.266563 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.266598 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.266606 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.266619 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.266629 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.288200 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 15:36:57.716793265 +0000 UTC Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.294803 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:14 crc kubenswrapper[4902]: E0121 14:35:14.294987 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.369443 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.369486 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.369496 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.369513 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.369544 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.471742 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.471790 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.471803 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.471820 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.471832 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.574451 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.574512 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.574528 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.574545 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.574556 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.677113 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.677151 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.677159 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.677171 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.677181 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.767864 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/3.log" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.768495 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/2.log" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.771859 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" exitCode=1 Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.771929 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.771989 4902 scope.go:117] "RemoveContainer" containerID="c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.772540 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:35:14 crc kubenswrapper[4902]: E0121 14:35:14.772774 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.780749 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.780785 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.780793 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.780808 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.780818 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.790131 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.803422 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 
14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.815147 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.827460 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.839601 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.851318 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.867162 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.880746 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.882614 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.882675 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.882687 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.882710 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.882740 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.893694 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.905745 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.918647 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.931680 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.947838 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.976546 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93
c6f4efcbc2d39fb8000db85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:14Z\\\",\\\"message\\\":\\\"45-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 14:35:14.371450 6933 services_controller.go:445] Built service openshift-authentication/oauth-openshift LB template configs for network=default: []services.lbConfig(nil)\\\\nF0121 14:35:14.372204 6933 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 
2025-08-24T17:21:41Z]\\\\nI0121 14:35:14.372229 6933 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0121 14:35:14.372206 6933 services_controller.go:451] Built service openshift-authentication/oauth-open\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.985263 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.985310 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.985325 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.985348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.985362 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.995976 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.007488 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.017138 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.028284 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.089344 4902 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.089390 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.089400 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.089419 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.089429 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.192982 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.193082 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.193103 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.193135 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.193156 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.289319 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:04:05.165181666 +0000 UTC Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.294605 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.294637 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.294649 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:15 crc kubenswrapper[4902]: E0121 14:35:15.294715 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:15 crc kubenswrapper[4902]: E0121 14:35:15.295122 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:15 crc kubenswrapper[4902]: E0121 14:35:15.295109 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.296188 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.296248 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.296272 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.296298 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.296321 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.399370 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.399465 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.399484 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.399509 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.399526 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.502805 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.502892 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.502917 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.502948 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.502968 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.606130 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.606195 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.606221 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.606253 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.606277 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.714367 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.714403 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.714412 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.714427 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.714439 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.777640 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/3.log" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.782723 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:35:15 crc kubenswrapper[4902]: E0121 14:35:15.782981 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.797102 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.812229 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.816789 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.816872 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.816886 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.816908 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.816923 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.825435 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.838977 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.850118 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.860234 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.879800 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.893531 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.912420 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.922730 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.922788 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.922802 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.922825 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.922844 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.939357 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.958942 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.977167 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.991468 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.021684 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:14Z\\\",\\\"message\\\":\\\"45-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 14:35:14.371450 6933 services_controller.go:445] Built service openshift-authentication/oauth-openshift LB template configs for network=default: []services.lbConfig(nil)\\\\nF0121 14:35:14.372204 6933 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:35:14.372229 6933 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0121 14:35:14.372206 6933 services_controller.go:451] Built service openshift-authentication/oauth-open\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:35:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:16Z is after 2025-08-24T17:21:41Z"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.026064 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.026109 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.026121 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.026141 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.026155 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
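Every status-patch failure above bottoms out in the same crypto/x509 validity-window check: the webhook serving certificate's NotAfter (2025-08-24T17:21:41Z) is long past the node's clock (2026-01-21), so the TLS handshake to https://127.0.0.1:9743 is refused before any request is sent. A minimal Go sketch of that check, with an illustrative certificate path rather than anything taken from this cluster:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Illustrative path; the real webhook cert lives wherever the
        // network-node-identity pod mounts its serving certificate.
        pemBytes, err := os.ReadFile("/tmp/webhook-serving.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // crypto/x509 rejects chains whose verification time falls outside
        // [NotBefore, NotAfter]; that rejection is the "certificate has
        // expired or is not yet valid" text repeated throughout this log.
        now := time.Now()
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Printf("current time %s is outside [%s, %s]\n",
                now.Format(time.RFC3339),
                cert.NotBefore.Format(time.RFC3339),
                cert.NotAfter.Format(time.RFC3339))
        }
    }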
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.042604 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:16Z is after 2025-08-24T17:21:41Z"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.057439 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:16Z is after 2025-08-24T17:21:41Z"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.072446 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:16Z is after 2025-08-24T17:21:41Z"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.088760 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:16Z is after 2025-08-24T17:21:41Z"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.129062 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.129111 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.129125 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.129143 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.129155 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.231250 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.231336 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.231356 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.231729 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.232305 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.290214 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:04:38.994692461 +0000 UTC
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.293921 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588"
Jan 21 14:35:16 crc kubenswrapper[4902]: E0121 14:35:16.294184 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.309670 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.335554 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.335865 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.335965 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.336145 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.336264 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.439136 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.439189 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.439201 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.439220 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.439234 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.541989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.542067 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.542077 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.542098 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.542111 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.645428 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.645495 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.645508 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.645529 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.645547 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.749106 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.749154 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.749167 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.749184 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.749199 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.853244 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.853311 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.853325 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.853348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.853364 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.956612 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.956648 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.956748 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.956767 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.956778 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.060066 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.060123 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.060145 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.060167 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.060181 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
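The certificate_manager lines interleaved above report a fixed expiration (2026-02-24 05:53:03) but a rotation deadline that jumps between evaluations (2026-01-06, then 2026-01-07, then 2025-11-27), and every deadline already lies in the past relative to the node clock, so rotation is due immediately. This is consistent with a jittered deadline recomputed each pass; the sketch below uses the commonly described scheme of a uniform draw between 70% and 90% of the certificate's lifetime, and both those fractions and the 30-day lifetime are assumptions, not values read out of this log:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a fresh jittered point inside the certificate's
    // validity window; a new draw each evaluation explains the moving deadline.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.3*rand.Float64())) // assumed 70-90% policy
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
        notBefore := notAfter.Add(-30 * 24 * time.Hour)           // assumed issuance time
        fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    }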
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.163463 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.163556 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.163900 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.163958 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.163973 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.266713 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.266768 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.266778 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.266794 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.266804 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.291520 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:44:24.372107739 +0000 UTC
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.293950 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:35:17 crc kubenswrapper[4902]: E0121 14:35:17.294101 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.294356 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.294461 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:35:17 crc kubenswrapper[4902]: E0121 14:35:17.294700 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:35:17 crc kubenswrapper[4902]: E0121 14:35:17.294909 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.369397 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.369446 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.369458 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.369478 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.369492 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.472235 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.472294 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.472305 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.472321 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.472332 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.575256 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.575315 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.575325 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.575345 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.575361 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.678408 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.678465 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.678478 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.678499 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.678512 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.781321 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.781613 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.781678 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.781748 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.781816 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.884142 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.884192 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.884203 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.884218 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.884232 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.987288 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.987594 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.987679 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.987780 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.987841 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
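While these NotReady heartbeats repeat, the ovnkube-controller container shown earlier sits in CrashLoopBackOff with restartCount 3 and "back-off 40s restarting failed container". That 40s is consistent with kubelet's exponential restart backoff, commonly described as a 10s base doubling per restart up to a 5m cap (10s, 20s, 40s, ...); the constants in this sketch are those widely cited values, not ones taken from this log:

    package main

    import (
        "fmt"
        "time"
    )

    // crashLoopDelay returns the assumed back-off before restart n:
    // 10s doubling each restart, capped at 5m.
    func crashLoopDelay(restartCount int) time.Duration {
        const base = 10 * time.Second
        const maxDelay = 5 * time.Minute
        d := base
        for i := 1; i < restartCount; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 6; n++ {
            fmt.Printf("restart %d -> wait %s\n", n, crashLoopDelay(n)) // restart 3 -> 40s
        }
    }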
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.090582 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.090679 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.090705 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.090736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.090754 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.192994 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.193036 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.193081 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.193098 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.193110 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.291735 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 05:08:51.30420266 +0000 UTC
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.294296 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588"
Jan 21 14:35:18 crc kubenswrapper[4902]: E0121 14:35:18.294459 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa"
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.296665 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.296736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.296751 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.296800 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.296815 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.311541 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14
:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.336402 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.351030 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.367824 4902 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.394590 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.398784 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.398841 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.398852 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.398866 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.398904 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.412612 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.426586 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.437670 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.459278 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93
c6f4efcbc2d39fb8000db85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:14Z\\\",\\\"message\\\":\\\"45-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 14:35:14.371450 6933 services_controller.go:445] Built service openshift-authentication/oauth-openshift LB template configs for network=default: []services.lbConfig(nil)\\\\nF0121 14:35:14.372204 6933 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:35:14.372229 6933 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0121 14:35:14.372206 6933 services_controller.go:451] Built service openshift-authentication/oauth-open\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:35:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.480588 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48
f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.492996 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.502105 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.502332 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.502395 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.502469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.502531 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.503407 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.522223 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.535781 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.546114 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c6d394d-639a-4b18-9e61-3f28950ff275\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5941fa9b0928cf6a092eda06a1456dc7cc2e20ca9cded4fc963bf722557ddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbd2d677787d4c4acc335092c83a711598f506dd9b3d9e967cb27921650973f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd2d677787d4c4acc335092c83a711598f506dd9b3d9e967cb27921650973f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.561589 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"mac
hine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.575120 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.590272 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.603009 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 
14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.604844 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.604884 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.604896 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.604917 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.604930 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.708828 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.708919 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.708943 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.708975 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.709002 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.812188 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.812236 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.812249 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.812267 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.812287 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.915627 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.915677 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.915696 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.915718 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.915734 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.018422 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.018467 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.018480 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.018500 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.018515 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.122029 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.122118 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.122138 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.122162 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.122179 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.225555 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.225643 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.225664 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.225712 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.225733 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.293276 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:50:31.856510404 +0000 UTC Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.294555 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.294603 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.294747 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:19 crc kubenswrapper[4902]: E0121 14:35:19.294735 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:19 crc kubenswrapper[4902]: E0121 14:35:19.294876 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:19 crc kubenswrapper[4902]: E0121 14:35:19.294964 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.338128 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.338187 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.338204 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.338232 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.338253 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.440862 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.440952 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.440976 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.441007 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.441029 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.548798 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.548853 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.549108 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.549130 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.549141 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.651770 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.651808 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.651817 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.651830 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.651840 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.755723 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.755828 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.755842 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.755870 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.755886 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.859380 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.859415 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.859424 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.859440 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.859449 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.963011 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.963114 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.963134 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.963160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.963179 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.066342 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.066396 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.066405 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.066426 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.066437 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.169338 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.169418 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.169445 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.169478 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.169502 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.272892 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.272950 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.272964 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.272986 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.273005 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.294572 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:23:54.142742961 +0000 UTC Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.294820 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:20 crc kubenswrapper[4902]: E0121 14:35:20.295016 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.375586 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.375629 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.375638 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.375655 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.375666 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.478194 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.478254 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.478266 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.478284 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.478300 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.582174 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.582245 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.582260 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.582283 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.582300 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.684894 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.684937 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.684950 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.684967 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.684979 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.788695 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.788763 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.788787 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.788820 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.788847 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.891515 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.891575 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.891594 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.891619 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.891633 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.994693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.994747 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.994756 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.994774 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.994783 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.097112 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.097156 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.097168 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.097184 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.097195 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.182646 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.182972 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.182932255 +0000 UTC m=+147.259765344 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.199322 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.199357 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.199368 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.199385 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.199398 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.283857 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.283924 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.283969 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.284011 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284110 4902 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284159 4902 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284189 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284120 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284237 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284253 4902 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284196 4902 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.284173152 +0000 UTC m=+147.361006191 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284213 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284389 4902 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284313 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.284296856 +0000 UTC m=+147.361129895 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284500 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.284479811 +0000 UTC m=+147.361312840 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284519 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.284508392 +0000 UTC m=+147.361341421 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.294755 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.294771 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.294904 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.294780 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.294766 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:24:05.034192666 +0000 UTC Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.294999 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.295078 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
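The kube-api-access-* failures above come from projected volumes: each one bundles a service account token with the kube-root-ca.crt ConfigMap (plus openshift-service-ca.crt on OpenShift), and MountVolume.SetUp fails while those objects are not yet registered in the kubelet's object cache. A minimal sketch of that projected-volume shape, using local structs that mirror only the fields relevant here (the 3607-second token lifetime is the usual default, assumed rather than read from this log):

    // Print the shape of a kube-api-access projected volume source.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type configMapProjection struct {
    	Name string `json:"name"`
    }

    type serviceAccountTokenProjection struct {
    	ExpirationSeconds int64  `json:"expirationSeconds"`
    	Path              string `json:"path"`
    }

    type volumeProjection struct {
    	ConfigMap           *configMapProjection           `json:"configMap,omitempty"`
    	ServiceAccountToken *serviceAccountTokenProjection `json:"serviceAccountToken,omitempty"`
    }

    type projectedVolumeSource struct {
    	Sources []volumeProjection `json:"sources"`
    }

    func main() {
    	v := projectedVolumeSource{Sources: []volumeProjection{
    		{ServiceAccountToken: &serviceAccountTokenProjection{ExpirationSeconds: 3607, Path: "token"}},
    		{ConfigMap: &configMapProjection{Name: "kube-root-ca.crt"}},
    		{ConfigMap: &configMapProjection{Name: "openshift-service-ca.crt"}},
    	}}
    	out, _ := json.MarshalIndent(v, "", "  ")
    	fmt.Println(string(out))
    }

Because the projection needs every listed source, a single missing ConfigMap is enough to fail the whole kube-api-access mount, which is why both pods above back off for the full 1m4s.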
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.301989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.302018 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.302028 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.302063 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.302076 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.404128 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.404172 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.404182 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.404205 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.404214 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.507258 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.507323 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.507341 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.507366 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.507383 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.610781 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.610835 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.610851 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.610869 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.610882 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.714304 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.714366 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.714387 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.714416 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.714438 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.816865 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.816928 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.816946 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.816971 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.817093 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.920348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.920407 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.920425 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.920450 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.920558 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.023999 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.024100 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.024123 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.024152 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.024172 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.126925 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.126960 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.126969 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.126984 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.126995 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.230245 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.230296 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.230308 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.230330 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.230343 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.294173 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:22 crc kubenswrapper[4902]: E0121 14:35:22.294369 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.295224 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:23:27.358938444 +0000 UTC Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.333609 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.333662 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.333680 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.333702 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.333715 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.436076 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.436120 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.436130 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.436151 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.436191 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.539707 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.539756 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.539770 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.539793 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.539807 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.643240 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.643301 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.643313 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.643336 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.643351 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.745782 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.745832 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.745841 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.745854 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.745863 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.765588 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.765629 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.765642 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.765657 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.765667 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: E0121 14:35:22.778464 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.783099 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.783187 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
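The status patch above is rejected because the node.network-node-identity.openshift.io webhook serves a certificate that expired on 2025-08-24, while the node clock reads 2026-01-21. A minimal sketch of the same NotAfter check with Go's standard library (the certificate path here is hypothetical):

    // Report whether a PEM-encoded certificate is within its validity window,
    // the check Go's TLS stack performs before the webhook Post can succeed.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	now := time.Now().UTC()
    	switch {
    	case now.After(cert.NotAfter):
    		fmt.Printf("certificate has expired: current time %s is after %s\n",
    			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
    	case now.Before(cert.NotBefore):
    		fmt.Println("certificate is not yet valid")
    	default:
    		fmt.Println("certificate is within its validity window")
    	}
    }

Until that webhook certificate is renewed (or the node clock corrected), every node status patch will fail the same way, which is why the kubelet immediately retries below.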
event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.783204 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.783226 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.783241 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: E0121 14:35:22.794887 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.798123 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.798219 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.798238 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.798264 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.798288 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: E0121 14:35:22.812126 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.207643 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.207691 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.207704 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.207722 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.207747 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: E0121 14:35:23.229096 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.235212 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.235270 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.235287 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.235314 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.235330 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: E0121 14:35:23.264888 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:23 crc kubenswrapper[4902]: E0121 14:35:23.265089 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.266769 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.266843 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.266857 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.266875 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.266886 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.294506 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.294555 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.294637 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:23 crc kubenswrapper[4902]: E0121 14:35:23.294759 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:23 crc kubenswrapper[4902]: E0121 14:35:23.294845 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:23 crc kubenswrapper[4902]: E0121 14:35:23.294974 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.295477 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:28:19.730849606 +0000 UTC Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.370187 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.370322 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.370352 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.370390 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.370414 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.473414 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.473473 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.473484 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.473509 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.473521 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.576221 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.576261 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.576273 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.576291 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.576303 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.679521 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.679557 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.679567 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.679586 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.679598 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.781796 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.781833 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.781841 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.781854 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.781865 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.884479 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.884545 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.884560 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.884573 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.884582 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.987077 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.987119 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.987129 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.987147 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.987159 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.090177 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.090254 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.090276 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.090308 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.090354 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.193743 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.193800 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.193820 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.193842 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.193859 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.294620 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:24 crc kubenswrapper[4902]: E0121 14:35:24.294801 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.295635 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:33:12.905032545 +0000 UTC Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.296217 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.296248 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.296261 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.296281 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.296293 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.398851 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.398915 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.398934 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.398957 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.398974 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.502226 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.502372 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.502391 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.502417 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.502473 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.616177 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.616279 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.616291 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.616309 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.616324 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.719452 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.719508 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.719534 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.719560 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.719576 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.821973 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.822312 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.822376 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.822453 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.822546 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.926077 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.926151 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.926175 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.926205 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.926227 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.029169 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.029239 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.029260 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.029285 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.029303 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.132438 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.132490 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.132503 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.132519 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.132533 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.235755 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.235811 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.235822 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.235839 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.235851 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.294068 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.294134 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.294179 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:25 crc kubenswrapper[4902]: E0121 14:35:25.294225 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:25 crc kubenswrapper[4902]: E0121 14:35:25.294343 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:25 crc kubenswrapper[4902]: E0121 14:35:25.294444 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.296093 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 00:41:29.504667789 +0000 UTC Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.338744 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.338803 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.338818 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.338844 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.338863 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.441980 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.442034 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.442058 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.442072 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.442080 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.545772 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.545818 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.545832 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.545970 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.545984 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.649011 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.649063 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.649074 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.649089 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.649100 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.751393 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.751808 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.751930 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.752119 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.752222 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.855646 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.855702 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.855716 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.855736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.855749 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.958272 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.958329 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.958345 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.958369 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.958391 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.062236 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.062292 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.062310 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.062332 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.062349 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.164567 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.164638 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.164658 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.164687 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.164710 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.268090 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.268177 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.268203 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.268235 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.268257 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.294588 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:26 crc kubenswrapper[4902]: E0121 14:35:26.294820 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.296728 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:51:12.499816645 +0000 UTC Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.371884 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.371963 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.371991 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.372022 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.372092 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.475649 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.475721 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.475745 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.475779 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.475802 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.578810 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.578876 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.578900 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.579016 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.579073 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.682419 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.682499 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.682511 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.682547 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.682561 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.785035 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.785106 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.785118 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.785135 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.785151 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.888733 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.888833 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.888857 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.888889 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.888911 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.992187 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.992254 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.992267 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.992288 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.992304 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.095872 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.095932 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.095944 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.095962 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.095974 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.198578 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.198628 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.198644 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.198668 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.198681 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.293994 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.294070 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:27 crc kubenswrapper[4902]: E0121 14:35:27.294178 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:27 crc kubenswrapper[4902]: E0121 14:35:27.294283 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.294015 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:27 crc kubenswrapper[4902]: E0121 14:35:27.294396 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.297067 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:03:42.764405929 +0000 UTC Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.301390 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.301434 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.301448 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.301474 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.301497 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.403870 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.403909 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.403922 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.403939 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.403951 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.506626 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.506941 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.507098 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.507240 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.507330 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.610105 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.610160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.610171 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.610192 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.610205 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.714272 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.714314 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.714327 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.714350 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.714364 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.817736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.818114 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.818208 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.818303 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.818377 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.920658 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.921110 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.921239 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.921338 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.921444 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.024284 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.024706 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.024894 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.025006 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.025125 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.127448 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.127733 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.127800 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.127871 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.127933 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.230596 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.230664 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.230682 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.230702 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.230715 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.294570 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:28 crc kubenswrapper[4902]: E0121 14:35:28.294796 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.297586 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:43:05.082207595 +0000 UTC Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.309998 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.327181 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.333411 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.333478 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.333495 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.333519 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.333535 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.342830 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.356945 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.371701 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.386983 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.400789 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.414842 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.429092 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.436872 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.436911 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc 
kubenswrapper[4902]: I0121 14:35:28.436922 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.436940 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.436956 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.443240 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.457411 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.470466 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.488188 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93
c6f4efcbc2d39fb8000db85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:14Z\\\",\\\"message\\\":\\\"45-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 14:35:14.371450 6933 services_controller.go:445] Built service openshift-authentication/oauth-openshift LB template configs for network=default: []services.lbConfig(nil)\\\\nF0121 14:35:14.372204 6933 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:35:14.372229 6933 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0121 14:35:14.372206 6933 services_controller.go:451] Built service openshift-authentication/oauth-open\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:35:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.506896 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48
f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.518649 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.529961 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.540013 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.540059 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.540069 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.540086 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.540100 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.540459 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c6d394d-639a-4b18-9e61-3f28950ff275\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5941fa9b0928cf6a092eda06a1456dc7cc2e20ca9cded4fc963bf722557ddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbd2d677787d4c4acc335092c83a711598f506dd9b3d9e967cb27921650973f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd2d677787d4c4acc335092c83a711598f506dd9b3d9e967cb27921650973f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.551277 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.564383 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.643300 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.643347 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.643358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.643376 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.643417 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.745568 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.745927 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.745939 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.745961 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.745973 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.848314 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.848358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.848371 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.848387 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.848398 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.951090 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.951121 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.951131 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.951146 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.951156 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.053670 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.054097 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.054203 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.054307 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.054411 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.157857 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.157911 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.157926 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.157945 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.157956 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.260921 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.261015 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.261093 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.261130 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.261157 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.294131 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:35:29 crc kubenswrapper[4902]: E0121 14:35:29.294325 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.294672 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:35:29 crc kubenswrapper[4902]: E0121 14:35:29.294804 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.295134 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:35:29 crc kubenswrapper[4902]: E0121 14:35:29.295262 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.298594 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:54:12.774958816 +0000 UTC
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.364247 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.364328 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.364358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.364395 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.364422 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.467706 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.467797 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.467823 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.467863 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.467889 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.571129 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.571192 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.571213 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.571240 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.571264 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.674459 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.674814 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.674954 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.675117 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.675240 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.778027 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.778115 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.778151 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.778187 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.778210 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.881696 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.881743 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.881754 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.881770 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.881780 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.985012 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.985529 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.985691 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.985849 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.986032 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.089547 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.089605 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.089623 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.089646 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.089664 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:30Z","lastTransitionTime":"2026-01-21T14:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.192768 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.192852 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.192873 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.192904 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.192944 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:30Z","lastTransitionTime":"2026-01-21T14:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.294564 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588"
Jan 21 14:35:30 crc kubenswrapper[4902]: E0121 14:35:30.294854 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.296693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.297001 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.297520 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.297837 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.298320 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:30Z","lastTransitionTime":"2026-01-21T14:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.298798 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 09:22:56.534754375 +0000 UTC
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.401877 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.402335 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.402601 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.402857 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.403019 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:30Z","lastTransitionTime":"2026-01-21T14:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.506492 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.506904 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.507107 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.507489 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.507852 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:30Z","lastTransitionTime":"2026-01-21T14:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.230718 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.231204 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.231360 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.231514 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.231647 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:31Z","lastTransitionTime":"2026-01-21T14:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.294580 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.294580 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.294629 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:31 crc kubenswrapper[4902]: E0121 14:35:31.295257 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:31 crc kubenswrapper[4902]: E0121 14:35:31.295148 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:31 crc kubenswrapper[4902]: E0121 14:35:31.295088 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.295893 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:35:31 crc kubenswrapper[4902]: E0121 14:35:31.296136 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.300053 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 10:26:30.303841964 +0000 UTC Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.334715 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.335073 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.335210 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.335364 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.335504 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:31Z","lastTransitionTime":"2026-01-21T14:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.438904 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.439023 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.439038 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.439079 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.439097 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:31Z","lastTransitionTime":"2026-01-21T14:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.164725 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.164776 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.164788 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.164809 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.164821 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:32Z","lastTransitionTime":"2026-01-21T14:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.268308 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.268357 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.268372 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.268391 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.268406 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:32Z","lastTransitionTime":"2026-01-21T14:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.294349 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:32 crc kubenswrapper[4902]: E0121 14:35:32.294485 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.300282 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:22:25.85490037 +0000 UTC Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.371609 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.371663 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.371676 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.371694 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.371708 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:32Z","lastTransitionTime":"2026-01-21T14:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.474065 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.474113 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.474124 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.474143 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.474156 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:32Z","lastTransitionTime":"2026-01-21T14:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.194421 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.194462 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.194471 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.194493 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.194504 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.294204 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.294303 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.294338 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.294459 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.294504 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.294553 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.297186 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.297219 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.297229 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.297243 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.297253 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.300563 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:33:18.727416624 +0000 UTC Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.400656 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.400695 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.400706 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.400724 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.400737 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.591223 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.595154 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.595194 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.595207 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.595226 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.595238 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.615443 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.620620 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.620693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.620712 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.620740 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.620761 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.638581 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.642968 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.643002 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.643018 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.643058 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.643078 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.657862 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.662426 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.662469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.662484 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.662505 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.662519 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.678255 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.678378 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.680118 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.680154 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.680163 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.680176 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.680185 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.782840 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.782903 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.782922 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.782950 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.782969 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.886378 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.886447 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.886471 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.886504 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.886528 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.989638 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.989694 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.989711 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.989903 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.989935 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.093391 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.093444 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.093454 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.093472 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.093482 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.195603 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.195654 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.195668 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.195691 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.195704 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.294384 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588"
Jan 21 14:35:34 crc kubenswrapper[4902]: E0121 14:35:34.294601 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.298712 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.298760 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.298774 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.298795 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.298809 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.300679 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:05:41.359993249 +0000 UTC
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.401744 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.401806 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.401823 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.401848 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.401866 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
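The repeated NetworkPluginNotReady error means no CNI configuration has been written to /etc/kubernetes/cni/net.d/ yet; pod sandboxes stay blocked until the network operator drops one there. The readiness check behind that message amounts to scanning the directory for a *.conf, *.conflist, or *.json file. A minimal sketch of that scan, assuming only the path named in the log (hasCNIConfig is a hypothetical helper, not kubelet code):

// Report whether a CNI config directory contains any recognized config file.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	fmt.Println(ok, err) // prints "false <nil>" while the directory exists but is still empty
}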
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.505620 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.505659 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.505675 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.505694 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.505709 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.607736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.607772 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.607783 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.607799 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.607808 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.711164 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.711231 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.711249 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.711276 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.711301 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.814681 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.814765 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.814782 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.814803 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.814822 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.917806 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.917875 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.917899 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.917923 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.917941 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.020902 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.020982 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.020996 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.021020 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.021036 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.124128 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.124203 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.124215 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.124236 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.124251 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.227703 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.227746 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.227775 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.227792 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.227803 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.294912 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.295005 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.295082 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:35:35 crc kubenswrapper[4902]: E0121 14:35:35.295206 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:35 crc kubenswrapper[4902]: E0121 14:35:35.295349 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:35 crc kubenswrapper[4902]: E0121 14:35:35.295489 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.300908 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:02:28.870483496 +0000 UTC Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.330484 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.330539 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.330550 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.330572 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.330586 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.433877 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.433943 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.433962 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.433984 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.434000 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.537023 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.537101 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.537122 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.537145 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.537158 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.639859 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.639900 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.639919 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.639941 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.639952 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.743922 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.744410 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.744691 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.744886 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.745103 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.853305 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.853346 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.853356 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.853368 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.853377 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.956580 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.956926 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.957177 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.957392 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.957636 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.061196 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.061619 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.061699 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.061778 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.061854 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.164720 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.164774 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.164787 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.164804 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.164816 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.266958 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.267264 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.267496 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.267675 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.267788 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.294906 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588"
Jan 21 14:35:36 crc kubenswrapper[4902]: E0121 14:35:36.295246 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.301842 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:33:36.756220403 +0000 UTC
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.371658 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.371724 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.371743 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.371764 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.371777 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.475148 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.475194 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.475204 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.475220 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.475232 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.480107 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588"
Jan 21 14:35:36 crc kubenswrapper[4902]: E0121 14:35:36.480371 4902 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 14:35:36 crc kubenswrapper[4902]: E0121 14:35:36.480506 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs podName:05d94e6a-249a-484c-8895-085e81f1dfaa nodeName:}" failed. No retries permitted until 2026-01-21 14:36:40.480479523 +0000 UTC m=+162.557312552 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs") pod "network-metrics-daemon-kq588" (UID: "05d94e6a-249a-484c-8895-085e81f1dfaa") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.577728 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.577762 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.577770 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.577785 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.577794 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
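The nestedpendingoperations.go:348 record shows the volume manager's exponential backoff: the failed MountVolume.SetUp may not be retried for 1m4s, which is consistent with a doubling delay (64s is an assumed 500ms base doubled seven times). A minimal sketch of that doubling-with-cap schedule; the base and cap values here are assumptions for illustration, not read from the kubelet:

// Doubling retry delay with a cap, matching the shape of durationBeforeRetry.
package main

import (
	"fmt"
	"time"
)

func durationBeforeRetry(failures int, base, maxDelay time.Duration) time.Duration {
	d := base
	for i := 0; i < failures; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for f := 0; f <= 8; f++ {
		fmt.Println(f, durationBeforeRetry(f, 500*time.Millisecond, 2*time.Minute+2*time.Second))
	}
	// After seven doublings the wait is 1m4s, matching the log's durationBeforeRetry.
}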
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.680402 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.680448 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.680460 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.680477 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.680491 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.784036 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.784119 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.784135 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.784158 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.784175 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.886560 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.886617 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.886639 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.886664 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.886681 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.989458 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.989494 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.989506 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.989524 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.989534 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.092407 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.092443 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.092454 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.092470 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.092481 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.195117 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.195160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.195169 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.195183 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.195193 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.294220 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.294301 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:35:37 crc kubenswrapper[4902]: E0121 14:35:37.294522 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.294704 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:35:37 crc kubenswrapper[4902]: E0121 14:35:37.294981 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:35:37 crc kubenswrapper[4902]: E0121 14:35:37.295195 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.298979 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.299026 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.299066 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.299085 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.299099 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.302774 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:12:00.342769747 +0000 UTC
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.402358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.402407 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.402422 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.402442 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.402457 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.504976 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.505029 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.505043 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.505090 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.505103 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.607824 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.607871 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.607881 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.607898 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.607913 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.710721 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.710791 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.710800 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.710812 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.710822 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.814222 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.814350 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.814363 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.814382 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.814395 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.917502 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.917553 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.917564 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.917582 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.917594 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.019798 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.019839 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.019849 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.019867 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.019880 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.122997 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.123064 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.123096 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.123110 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.123120 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.226059 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.226102 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.226115 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.226131 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.226141 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.294422 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588"
Jan 21 14:35:38 crc kubenswrapper[4902]: E0121 14:35:38.294845 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.303435 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 07:23:09.945955645 +0000 UTC
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.329336 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.329641 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.329744 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.329845 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.329914 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.331461 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.347447 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.364668 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.383496 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.398779 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.417353 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:14Z\\\",\\\"message\\\":\\\"45-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 14:35:14.371450 6933 services_controller.go:445] Built service openshift-authentication/oauth-openshift LB template configs for network=default: []services.lbConfig(nil)\\\\nF0121 14:35:14.372204 6933 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:35:14.372229 6933 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0121 14:35:14.372206 6933 services_controller.go:451] Built service openshift-authentication/oauth-open\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:35:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.433233 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.433358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.433388 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.433426 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.433453 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.433938 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c6d394d-639a-4b18-9e61-3f28950ff275\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5941fa9b0928cf6a092eda06a1456dc7cc2e20ca9cded4fc963bf722557ddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbd2d677787d4c4acc335092c83a711598f506dd9b3d9e967cb27921650973f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd2d677787d4c4acc335092c83a711598f506dd9b3d9e967cb27921650973f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.448010 4902 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.460155 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.472890 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.487457 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 
14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.501679 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.513482 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.523757 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.535939 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.536536 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.536612 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.536626 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.536649 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.536663 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.552295 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.564773 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.577398 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.590426 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.639694 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.640076 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.640331 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.640551 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.640731 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.744083 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.744486 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.744688 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.744886 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.745025 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.847143 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.847216 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.847231 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.847249 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.847262 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.950250 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.950690 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.950895 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.951092 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.951273 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.053963 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.054036 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.054066 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.054085 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.054097 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.157291 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.157344 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.157356 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.157375 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.157388 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.261834 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.261884 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.261896 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.261914 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.261927 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.294515 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.294744 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.294898 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:39 crc kubenswrapper[4902]: E0121 14:35:39.294908 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:39 crc kubenswrapper[4902]: E0121 14:35:39.295453 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:39 crc kubenswrapper[4902]: E0121 14:35:39.295839 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.304115 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 18:42:35.605703334 +0000 UTC Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.365051 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.365103 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.365125 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.365145 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.365158 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.468324 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.468865 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.469123 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.469293 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.469472 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.572630 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.573204 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.573474 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.573712 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.573927 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.677172 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.677231 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.677248 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.677277 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.677295 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.780435 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.780842 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.781016 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.781219 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.781406 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.885355 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.885423 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.885442 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.885468 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.885490 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.988608 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.988670 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.988687 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.988708 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.988720 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.091871 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.091964 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.091989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.092023 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.092085 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.195383 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.195438 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.195450 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.195472 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.195484 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.294391 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:40 crc kubenswrapper[4902]: E0121 14:35:40.294873 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.298440 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.298478 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.298492 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.298515 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.298530 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.305240 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 06:16:30.099487182 +0000 UTC Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.401397 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.401447 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.401458 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.401480 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.401493 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.505910 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.505968 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.505982 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.506001 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.506013 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.609083 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.609125 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.609139 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.609158 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.609172 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.712199 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.712275 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.712295 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.712322 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.712340 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.815892 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.815950 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.815968 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.815992 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.816012 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.919290 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.919362 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.919399 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.919431 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.919452 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.023567 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.023639 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.023657 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.023681 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.023698 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.126391 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.126436 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.126448 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.126464 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.126477 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.229817 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.229898 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.229919 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.229950 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.229973 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.294546 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:41 crc kubenswrapper[4902]: E0121 14:35:41.294754 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.294831 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:41 crc kubenswrapper[4902]: E0121 14:35:41.294921 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.294994 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:41 crc kubenswrapper[4902]: E0121 14:35:41.295141 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.305473 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 20:16:53.121627667 +0000 UTC Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.333785 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.333880 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.333905 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.333949 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.333977 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.436800 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.436872 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.436910 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.436940 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.436957 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.539746 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.540062 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.540130 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.540193 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.540250 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.642731 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.642791 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.642813 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.642839 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.642858 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.745738 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.745775 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.745785 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.745800 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.745811 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.848537 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.848612 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.848647 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.848677 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.848697 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.952348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.952888 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.953071 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.953264 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.953390 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.057198 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.057265 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.057274 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.057294 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.057305 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.160133 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.160189 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.160202 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.160221 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.160236 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.262756 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.263265 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.263503 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.263738 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.263957 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.294640 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:42 crc kubenswrapper[4902]: E0121 14:35:42.295162 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.306246 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 03:44:58.622007953 +0000 UTC Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.367158 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.367233 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.367242 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.367256 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.367266 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.470140 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.470241 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.470292 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.470318 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.470335 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.573446 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.573857 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.574067 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.574247 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.574409 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.677734 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.677768 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.677778 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.677792 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.677803 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.780598 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.780669 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.780680 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.780693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.780744 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.883424 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.883462 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.883471 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.883484 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.883495 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.986509 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.986900 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.987008 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.987149 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.987253 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.090531 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.090583 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.090595 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.090615 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.090627 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.194764 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.194810 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.194821 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.194839 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.194851 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.294842 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.294933 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:43 crc kubenswrapper[4902]: E0121 14:35:43.295032 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:43 crc kubenswrapper[4902]: E0121 14:35:43.295250 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.295594 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:43 crc kubenswrapper[4902]: E0121 14:35:43.295964 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.297308 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.297348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.297358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.297375 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.297387 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.306780 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:51:00.735637707 +0000 UTC Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.400056 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.400107 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.400122 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.400140 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.400152 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.502374 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.502403 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.502412 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.502425 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.502434 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.606372 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.606442 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.606470 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.606505 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.606526 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.709791 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.709896 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.709914 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.709947 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.709965 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.813897 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.813944 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.813952 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.813968 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.813977 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.877014 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.877075 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.877086 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.877102 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.877113 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.995694 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq"] Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.996251 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.998373 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.000696 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.002087 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.002939 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.019600 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.019582982 podStartE2EDuration="58.019582982s" podCreationTimestamp="2026-01-21 14:34:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.019277583 +0000 UTC m=+106.096110612" watchObservedRunningTime="2026-01-21 14:35:44.019582982 +0000 UTC m=+106.096416011" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.062861 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8146a15d-15b4-4340-bc59-aa7767cc7977-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.062934 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8146a15d-15b4-4340-bc59-aa7767cc7977-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.062959 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8146a15d-15b4-4340-bc59-aa7767cc7977-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.062987 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8146a15d-15b4-4340-bc59-aa7767cc7977-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.063005 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8146a15d-15b4-4340-bc59-aa7767cc7977-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.092139 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-62549" podStartSLOduration=86.092116549 podStartE2EDuration="1m26.092116549s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.091793369 +0000 UTC m=+106.168626418" watchObservedRunningTime="2026-01-21 14:35:44.092116549 +0000 UTC m=+106.168949578" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.109263 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-h68nf" podStartSLOduration=86.109238164 podStartE2EDuration="1m26.109238164s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.108163222 +0000 UTC m=+106.184996251" watchObservedRunningTime="2026-01-21 14:35:44.109238164 +0000 UTC m=+106.186071193" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.159455 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=84.159436905 podStartE2EDuration="1m24.159436905s" podCreationTimestamp="2026-01-21 14:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.159055454 +0000 UTC m=+106.235888483" watchObservedRunningTime="2026-01-21 14:35:44.159436905 +0000 UTC m=+106.236269934" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.164616 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8146a15d-15b4-4340-bc59-aa7767cc7977-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.164677 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8146a15d-15b4-4340-bc59-aa7767cc7977-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.164713 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8146a15d-15b4-4340-bc59-aa7767cc7977-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.164735 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8146a15d-15b4-4340-bc59-aa7767cc7977-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.164764 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8146a15d-15b4-4340-bc59-aa7767cc7977-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.164894 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8146a15d-15b4-4340-bc59-aa7767cc7977-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.164975 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8146a15d-15b4-4340-bc59-aa7767cc7977-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.166541 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8146a15d-15b4-4340-bc59-aa7767cc7977-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.179295 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8146a15d-15b4-4340-bc59-aa7767cc7977-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.179483 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=87.179466694 podStartE2EDuration="1m27.179466694s" podCreationTimestamp="2026-01-21 14:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.178174147 +0000 UTC m=+106.255007176" watchObservedRunningTime="2026-01-21 14:35:44.179466694 +0000 UTC m=+106.256299723" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.181912 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8146a15d-15b4-4340-bc59-aa7767cc7977-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.202712 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=87.202691865 podStartE2EDuration="1m27.202691865s" podCreationTimestamp="2026-01-21 14:34:17 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.202023006 +0000 UTC m=+106.278856035" watchObservedRunningTime="2026-01-21 14:35:44.202691865 +0000 UTC m=+106.279524894" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.271004 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.27097906 podStartE2EDuration="28.27097906s" podCreationTimestamp="2026-01-21 14:35:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.269995071 +0000 UTC m=+106.346828100" watchObservedRunningTime="2026-01-21 14:35:44.27097906 +0000 UTC m=+106.347812089" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.292609 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podStartSLOduration=86.292588514 podStartE2EDuration="1m26.292588514s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.281340369 +0000 UTC m=+106.358173388" watchObservedRunningTime="2026-01-21 14:35:44.292588514 +0000 UTC m=+106.369421543" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.294128 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:44 crc kubenswrapper[4902]: E0121 14:35:44.294288 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.307010 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lg6wz" podStartSLOduration=86.306989441 podStartE2EDuration="1m26.306989441s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.292941095 +0000 UTC m=+106.369774124" watchObservedRunningTime="2026-01-21 14:35:44.306989441 +0000 UTC m=+106.383822480" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.307219 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 13:59:11.1884524 +0000 UTC Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.307314 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.307904 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-mztd6" podStartSLOduration=86.307897157 podStartE2EDuration="1m26.307897157s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.30730246 +0000 UTC m=+106.384135499" watchObservedRunningTime="2026-01-21 14:35:44.307897157 +0000 UTC m=+106.384730186" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.314738 4902 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.315627 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.319983 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" podStartSLOduration=86.319963066 podStartE2EDuration="1m26.319963066s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.319806541 +0000 UTC m=+106.396639580" watchObservedRunningTime="2026-01-21 14:35:44.319963066 +0000 UTC m=+106.396796095" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.887424 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" event={"ID":"8146a15d-15b4-4340-bc59-aa7767cc7977","Type":"ContainerStarted","Data":"e5cabf022c587ac20a0ec5e7df00da5b80c7f5b24d2ee6ecb683a482297a7e17"} Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.887510 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" event={"ID":"8146a15d-15b4-4340-bc59-aa7767cc7977","Type":"ContainerStarted","Data":"38944b7c82071b7dd14c93d2b9cfbf636bee0ff8d9bd181a59f1ed9fcac0a38c"} Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.902788 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" podStartSLOduration=86.902772806 podStartE2EDuration="1m26.902772806s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.902524928 +0000 UTC m=+106.979357977" watchObservedRunningTime="2026-01-21 14:35:44.902772806 +0000 UTC m=+106.979605835" Jan 21 14:35:45 crc kubenswrapper[4902]: I0121 14:35:45.294368 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:45 crc kubenswrapper[4902]: I0121 14:35:45.294475 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:45 crc kubenswrapper[4902]: E0121 14:35:45.294518 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:45 crc kubenswrapper[4902]: E0121 14:35:45.294610 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:45 crc kubenswrapper[4902]: I0121 14:35:45.294469 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:45 crc kubenswrapper[4902]: E0121 14:35:45.294870 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:46 crc kubenswrapper[4902]: I0121 14:35:46.294038 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:46 crc kubenswrapper[4902]: E0121 14:35:46.294295 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:46 crc kubenswrapper[4902]: I0121 14:35:46.295035 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:35:46 crc kubenswrapper[4902]: E0121 14:35:46.295221 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" Jan 21 14:35:47 crc kubenswrapper[4902]: I0121 14:35:47.294596 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:47 crc kubenswrapper[4902]: I0121 14:35:47.294640 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:47 crc kubenswrapper[4902]: I0121 14:35:47.294615 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:47 crc kubenswrapper[4902]: E0121 14:35:47.294802 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:47 crc kubenswrapper[4902]: E0121 14:35:47.295081 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:47 crc kubenswrapper[4902]: E0121 14:35:47.295171 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:48 crc kubenswrapper[4902]: I0121 14:35:48.294369 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:48 crc kubenswrapper[4902]: E0121 14:35:48.295524 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:49 crc kubenswrapper[4902]: I0121 14:35:49.294315 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:49 crc kubenswrapper[4902]: E0121 14:35:49.294452 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:49 crc kubenswrapper[4902]: I0121 14:35:49.295282 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:49 crc kubenswrapper[4902]: E0121 14:35:49.295351 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:49 crc kubenswrapper[4902]: I0121 14:35:49.295502 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:49 crc kubenswrapper[4902]: E0121 14:35:49.295576 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:50 crc kubenswrapper[4902]: I0121 14:35:50.294364 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:50 crc kubenswrapper[4902]: E0121 14:35:50.294587 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.294970 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.295013 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.295093 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:51 crc kubenswrapper[4902]: E0121 14:35:51.295204 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:51 crc kubenswrapper[4902]: E0121 14:35:51.295309 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:51 crc kubenswrapper[4902]: E0121 14:35:51.295395 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.913116 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/1.log" Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.914267 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/0.log" Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.914331 4902 generic.go:334] "Generic (PLEG): container finished" podID="037b55cf-cb9e-41ce-8b1e-3898f490a4aa" containerID="1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6" exitCode=1 Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.914374 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mztd6" event={"ID":"037b55cf-cb9e-41ce-8b1e-3898f490a4aa","Type":"ContainerDied","Data":"1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6"} Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.914415 4902 scope.go:117] "RemoveContainer" containerID="801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e" Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.914916 4902 scope.go:117] "RemoveContainer" containerID="1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6" Jan 21 14:35:51 crc kubenswrapper[4902]: E0121 14:35:51.915233 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-mztd6_openshift-multus(037b55cf-cb9e-41ce-8b1e-3898f490a4aa)\"" pod="openshift-multus/multus-mztd6" podUID="037b55cf-cb9e-41ce-8b1e-3898f490a4aa" Jan 21 14:35:52 crc kubenswrapper[4902]: I0121 14:35:52.294842 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:52 crc kubenswrapper[4902]: E0121 14:35:52.295148 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:52 crc kubenswrapper[4902]: I0121 14:35:52.921283 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/1.log" Jan 21 14:35:53 crc kubenswrapper[4902]: I0121 14:35:53.294618 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:53 crc kubenswrapper[4902]: I0121 14:35:53.294814 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:53 crc kubenswrapper[4902]: E0121 14:35:53.295119 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:53 crc kubenswrapper[4902]: I0121 14:35:53.294812 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:53 crc kubenswrapper[4902]: E0121 14:35:53.295387 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:53 crc kubenswrapper[4902]: E0121 14:35:53.295271 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:54 crc kubenswrapper[4902]: I0121 14:35:54.294600 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:54 crc kubenswrapper[4902]: E0121 14:35:54.295108 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:55 crc kubenswrapper[4902]: I0121 14:35:55.293992 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:55 crc kubenswrapper[4902]: I0121 14:35:55.294120 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:55 crc kubenswrapper[4902]: I0121 14:35:55.294081 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:55 crc kubenswrapper[4902]: E0121 14:35:55.294744 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:55 crc kubenswrapper[4902]: E0121 14:35:55.294919 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:55 crc kubenswrapper[4902]: E0121 14:35:55.295095 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:56 crc kubenswrapper[4902]: I0121 14:35:56.294826 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:56 crc kubenswrapper[4902]: E0121 14:35:56.295083 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:57 crc kubenswrapper[4902]: I0121 14:35:57.294957 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:57 crc kubenswrapper[4902]: I0121 14:35:57.294966 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:57 crc kubenswrapper[4902]: I0121 14:35:57.295102 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:57 crc kubenswrapper[4902]: E0121 14:35:57.295870 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:57 crc kubenswrapper[4902]: E0121 14:35:57.296185 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:57 crc kubenswrapper[4902]: E0121 14:35:57.296311 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:58 crc kubenswrapper[4902]: I0121 14:35:58.294672 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:58 crc kubenswrapper[4902]: E0121 14:35:58.295651 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:58 crc kubenswrapper[4902]: E0121 14:35:58.303994 4902 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 21 14:35:58 crc kubenswrapper[4902]: E0121 14:35:58.373593 4902 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:35:59 crc kubenswrapper[4902]: I0121 14:35:59.295276 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:59 crc kubenswrapper[4902]: E0121 14:35:59.295528 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:59 crc kubenswrapper[4902]: I0121 14:35:59.295855 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:59 crc kubenswrapper[4902]: E0121 14:35:59.295959 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:59 crc kubenswrapper[4902]: I0121 14:35:59.296249 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:59 crc kubenswrapper[4902]: E0121 14:35:59.296439 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:36:00 crc kubenswrapper[4902]: I0121 14:36:00.295118 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:00 crc kubenswrapper[4902]: E0121 14:36:00.295556 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:36:00 crc kubenswrapper[4902]: I0121 14:36:00.296901 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:36:00 crc kubenswrapper[4902]: I0121 14:36:00.962821 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/3.log" Jan 21 14:36:00 crc kubenswrapper[4902]: I0121 14:36:00.966483 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} Jan 21 14:36:00 crc kubenswrapper[4902]: I0121 14:36:00.967222 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:36:01 crc kubenswrapper[4902]: I0121 14:36:01.001104 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podStartSLOduration=103.001075547 podStartE2EDuration="1m43.001075547s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:00.999891352 +0000 UTC m=+123.076724381" watchObservedRunningTime="2026-01-21 14:36:01.001075547 +0000 UTC m=+123.077908586" Jan 21 14:36:01 crc kubenswrapper[4902]: I0121 14:36:01.155100 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kq588"] Jan 21 14:36:01 crc kubenswrapper[4902]: I0121 14:36:01.155509 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:01 crc kubenswrapper[4902]: E0121 14:36:01.155592 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:36:01 crc kubenswrapper[4902]: I0121 14:36:01.294106 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:01 crc kubenswrapper[4902]: I0121 14:36:01.294198 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:01 crc kubenswrapper[4902]: E0121 14:36:01.294230 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:36:01 crc kubenswrapper[4902]: E0121 14:36:01.294405 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:36:01 crc kubenswrapper[4902]: I0121 14:36:01.294719 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:01 crc kubenswrapper[4902]: E0121 14:36:01.294844 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:36:03 crc kubenswrapper[4902]: I0121 14:36:03.294504 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:03 crc kubenswrapper[4902]: E0121 14:36:03.295830 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:36:03 crc kubenswrapper[4902]: I0121 14:36:03.294506 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:03 crc kubenswrapper[4902]: E0121 14:36:03.296112 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:36:03 crc kubenswrapper[4902]: I0121 14:36:03.294561 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:03 crc kubenswrapper[4902]: E0121 14:36:03.296336 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:36:03 crc kubenswrapper[4902]: I0121 14:36:03.294592 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:03 crc kubenswrapper[4902]: E0121 14:36:03.296567 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:36:03 crc kubenswrapper[4902]: E0121 14:36:03.375815 4902 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:36:05 crc kubenswrapper[4902]: I0121 14:36:05.294426 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:05 crc kubenswrapper[4902]: E0121 14:36:05.294652 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:36:05 crc kubenswrapper[4902]: I0121 14:36:05.294935 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:05 crc kubenswrapper[4902]: E0121 14:36:05.295028 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:36:05 crc kubenswrapper[4902]: I0121 14:36:05.295286 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:05 crc kubenswrapper[4902]: I0121 14:36:05.295320 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:05 crc kubenswrapper[4902]: E0121 14:36:05.295556 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:36:05 crc kubenswrapper[4902]: E0121 14:36:05.295781 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:36:07 crc kubenswrapper[4902]: I0121 14:36:07.294450 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:07 crc kubenswrapper[4902]: E0121 14:36:07.295327 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:36:07 crc kubenswrapper[4902]: I0121 14:36:07.295194 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:07 crc kubenswrapper[4902]: E0121 14:36:07.295673 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:36:07 crc kubenswrapper[4902]: I0121 14:36:07.294900 4902 scope.go:117] "RemoveContainer" containerID="1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6" Jan 21 14:36:07 crc kubenswrapper[4902]: I0121 14:36:07.295408 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:07 crc kubenswrapper[4902]: I0121 14:36:07.295218 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:07 crc kubenswrapper[4902]: E0121 14:36:07.296132 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:36:07 crc kubenswrapper[4902]: E0121 14:36:07.296268 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:36:08 crc kubenswrapper[4902]: E0121 14:36:08.376405 4902 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:36:08 crc kubenswrapper[4902]: I0121 14:36:08.998955 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/1.log" Jan 21 14:36:08 crc kubenswrapper[4902]: I0121 14:36:08.999068 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mztd6" event={"ID":"037b55cf-cb9e-41ce-8b1e-3898f490a4aa","Type":"ContainerStarted","Data":"5db75faf330517f6e52171754b04634f6e477d49b65357ee3295df0a7560fb4d"} Jan 21 14:36:09 crc kubenswrapper[4902]: I0121 14:36:09.294158 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:09 crc kubenswrapper[4902]: I0121 14:36:09.294218 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:09 crc kubenswrapper[4902]: E0121 14:36:09.294305 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:36:09 crc kubenswrapper[4902]: I0121 14:36:09.294320 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:09 crc kubenswrapper[4902]: I0121 14:36:09.294387 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:09 crc kubenswrapper[4902]: E0121 14:36:09.294429 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:36:09 crc kubenswrapper[4902]: E0121 14:36:09.294491 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:36:09 crc kubenswrapper[4902]: E0121 14:36:09.294565 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:36:11 crc kubenswrapper[4902]: I0121 14:36:11.294137 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:11 crc kubenswrapper[4902]: I0121 14:36:11.294164 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:11 crc kubenswrapper[4902]: I0121 14:36:11.294225 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:11 crc kubenswrapper[4902]: I0121 14:36:11.294335 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:11 crc kubenswrapper[4902]: E0121 14:36:11.295418 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:36:11 crc kubenswrapper[4902]: E0121 14:36:11.295569 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:36:11 crc kubenswrapper[4902]: E0121 14:36:11.295616 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:36:11 crc kubenswrapper[4902]: E0121 14:36:11.295836 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:36:13 crc kubenswrapper[4902]: I0121 14:36:13.294409 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:13 crc kubenswrapper[4902]: I0121 14:36:13.294473 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:13 crc kubenswrapper[4902]: I0121 14:36:13.294456 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:13 crc kubenswrapper[4902]: E0121 14:36:13.294635 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:36:13 crc kubenswrapper[4902]: I0121 14:36:13.294405 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:13 crc kubenswrapper[4902]: E0121 14:36:13.294850 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:36:13 crc kubenswrapper[4902]: E0121 14:36:13.295021 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:36:13 crc kubenswrapper[4902]: E0121 14:36:13.295181 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.848550 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.944000 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.944533 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.946076 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tn2zp"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.946389 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.946615 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.947024 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.948089 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.948338 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.952190 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x9bhh"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.953798 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.967769 4902 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: secrets "machine-approver-sa-dockercfg-nl2j4" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.967830 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-sa-dockercfg-nl2j4\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.969426 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.970067 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.975107 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.975365 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.975730 4902 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.975790 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.975885 4902 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.975903 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.975975 4902 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.975993 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.976074 4902 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this 
object
Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.976089 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.976141 4902 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object
Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.976154 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.976217 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.976594 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.977238 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.977408 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.977420 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-57jmg"]
Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.977575 4902 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object
Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.977599 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.977832 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.978169 4902 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the
namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.978199 4902 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.978203 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.978246 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.978338 4902 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: configmaps "openshift-apiserver-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.978362 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-apiserver-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.978453 4902 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": failed to list *v1.Secret: secrets "openshift-apiserver-operator-dockercfg-xtcjv" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.978486 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-xtcjv\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-dockercfg-xtcjv\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.978579 4902 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.978693 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.978848 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.978961 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.978993 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.979219 4902 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.979265 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.979359 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.978972 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.979641 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.979713 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.979836 4902 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.979865 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.979964 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.980088 4902 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.980259 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.980426 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.983257 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg"
Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.984233 4902 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.984272 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.984336 4902 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.984350 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.984434 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.984536 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.984633 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.984954 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.985000 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.985498 4902 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.988116 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-k2wkm"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.988865 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.989003 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n2xzb"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.989705 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.989951 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.990717 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wpch6"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.991533 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.993059 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.993638 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.993774 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9nw4v"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.994313 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.997889 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9hktz"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.998128 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.998570 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-9hktz"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.998742 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.998751 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.998863 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.999064 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2"]
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.999122 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.999271 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.999445 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.999530 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.999645 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:14.999868 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:14.999957 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.000172 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.000285 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.000395 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.000519 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.000557 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.000737 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.000838 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.000968 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.001020 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.001995 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.002331 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.002517 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.002675 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.002716 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.002832 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.003984 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004075 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004171 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004319 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004364 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004451 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004523 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004550 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004636 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004329 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004758 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004821 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004919 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004994 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.005116 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.007528 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.008278 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.023928 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.025401 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-j7zvj"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.026119 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.026516 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.027178 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.028564 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nccj"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.039142 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-j7zvj"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.055553 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lrgnw"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.055814 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.056113 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.056155 4902 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.056977 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3a537cbb-d314-4f04-94c8-625c03eb5a68-machine-approver-tls\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057032 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a537cbb-d314-4f04-94c8-625c03eb5a68-config\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057077 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057103 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbhnr\" (UniqueName: \"kubernetes.io/projected/01ee90aa-9465-4cd2-97a0-ce735d557649-kube-api-access-gbhnr\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057132 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-oauth-serving-cert\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057151 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-serving-cert\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057166 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-trusted-ca-bundle\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057238 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpczv\" (UniqueName: \"kubernetes.io/projected/3a537cbb-d314-4f04-94c8-625c03eb5a68-kube-api-access-dpczv\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") "
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057273 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-service-ca\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057300 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg77p\" (UniqueName: \"kubernetes.io/projected/853f0809-8828-4976-9b04-dd078ab64ced-kube-api-access-xg77p\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057330 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-oauth-config\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057370 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-client-ca\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057388 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-serving-cert\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057404 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057425 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057444 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4bc8\" (UniqueName: \"kubernetes.io/projected/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-kube-api-access-g4bc8\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057464 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-console-config\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057501 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a537cbb-d314-4f04-94c8-625c03eb5a68-auth-proxy-config\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057759 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.058355 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.058928 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.059161 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.059527 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b5657"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.060067 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-b5657" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.063167 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.075296 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.063717 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.066095 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.075680 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-2lccn"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.067082 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.075714 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.067390 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.068034 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.068068 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.069609 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.069614 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.069674 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.069774 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.069866 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.069902 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.076283 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.076374 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.076592 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2lccn"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.081616 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.081884 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.082610 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q69sb"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.083672 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xm5cd"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.084347 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.084967 4902 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.085320 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.086096 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.086974 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.089075 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.089822 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.090911 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.091789 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.095392 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.097784 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.098915 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nshzl"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.100462 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.113799 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.114651 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.116276 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.116864 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.117479 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.117952 4902 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.118559 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5q929"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.119974 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.120487 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.120712 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.123203 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lrz7m"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.124939 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.126349 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.126813 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.127417 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.127533 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.128178 4902 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.129676 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.131581 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tn2zp"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.134462 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-j7zvj"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.136948 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.138664 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-k2wkm"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.141760 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9hktz"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.144010 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x9bhh"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.145446 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.146467 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-57jmg"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.146772 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.150416 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.155559 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rfwp8"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.156822 4902 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rfwp8"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158092 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9nw4v"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158396 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158464 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbhnr\" (UniqueName: \"kubernetes.io/projected/01ee90aa-9465-4cd2-97a0-ce735d557649-kube-api-access-gbhnr\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158519 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-oauth-serving-cert\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158548 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-serving-cert\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158656 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-trusted-ca-bundle\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158706 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpczv\" (UniqueName: \"kubernetes.io/projected/3a537cbb-d314-4f04-94c8-625c03eb5a68-kube-api-access-dpczv\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158739 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-service-ca\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158764 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg77p\" (UniqueName: \"kubernetes.io/projected/853f0809-8828-4976-9b04-dd078ab64ced-kube-api-access-xg77p\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158791 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-oauth-config\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158815 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-client-ca\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158950 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-serving-cert\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159063 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159347 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159402 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4bc8\" (UniqueName: \"kubernetes.io/projected/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-kube-api-access-g4bc8\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159437 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lrgnw"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159436 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-console-config\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159528 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a537cbb-d314-4f04-94c8-625c03eb5a68-auth-proxy-config\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159557 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3a537cbb-d314-4f04-94c8-625c03eb5a68-machine-approver-tls\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159600 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a537cbb-d314-4f04-94c8-625c03eb5a68-config\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159976 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.160429 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-console-config\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.161414 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.161662 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-client-ca\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.161722 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.162265 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wpch6"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.162574 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-oauth-serving-cert\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.163649 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-trusted-ca-bundle\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.164777 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a537cbb-d314-4f04-94c8-625c03eb5a68-config\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.165053 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-service-ca\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.165124 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a537cbb-d314-4f04-94c8-625c03eb5a68-auth-proxy-config\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.165705 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.166731 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.169225 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-oauth-config\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.169935 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-serving-cert\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.170080 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.170809 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3a537cbb-d314-4f04-94c8-625c03eb5a68-machine-approver-tls\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.171210 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nshzl"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.171700 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-serving-cert\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.173952 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.173989 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.179967 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.180023 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b5657"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.180642 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nccj"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.181620 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xm5cd"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.182601 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.183553 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.184640 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.186454 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n2xzb"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.187147 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.192218 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-w2qlx"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.196973 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rfwp8"]
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.197167 4902 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.199645 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q69sb"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.202596 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-w2qlx"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.203745 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.205003 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5q929"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.206634 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.207126 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.207335 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.208616 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.209768 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lrz7m"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.210946 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.212200 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v4hs9"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.214753 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-w8c9w"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.214928 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.215813 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v4hs9"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.215930 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.247205 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.267520 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.287365 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.293867 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.293905 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.294071 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.294393 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.314004 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.327256 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.347206 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.367824 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.387474 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.406790 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.427626 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.446705 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.467467 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.488035 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.507863 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.528367 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.548727 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.568231 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.589070 4902 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.607942 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.627819 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.648036 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.666934 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.688504 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.708506 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.728022 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.748513 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.767811 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.788062 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.806902 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.832054 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.847901 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.866953 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.887744 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.907926 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.928343 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.947799 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: 
I0121 14:36:15.967969 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.987933 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.007351 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.027310 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.047452 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.067473 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.087967 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.105454 4902 request.go:700] Waited for 1.012915213s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0 Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.108000 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.127277 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.147881 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.161461 4902 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.161487 4902 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.161558 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config podName:01ee90aa-9465-4cd2-97a0-ce735d557649 nodeName:}" failed. No retries permitted until 2026-01-21 14:36:16.661531806 +0000 UTC m=+138.738364835 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config") pod "route-controller-manager-6576b87f9c-xrcxf" (UID: "01ee90aa-9465-4cd2-97a0-ce735d557649") : failed to sync configmap cache: timed out waiting for the condition Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.161585 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert podName:01ee90aa-9465-4cd2-97a0-ce735d557649 nodeName:}" failed. No retries permitted until 2026-01-21 14:36:16.661575737 +0000 UTC m=+138.738408776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert") pod "route-controller-manager-6576b87f9c-xrcxf" (UID: "01ee90aa-9465-4cd2-97a0-ce735d557649") : failed to sync secret cache: timed out waiting for the condition Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.167148 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.188184 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.206789 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.228321 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.247930 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.268145 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.288618 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.308854 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.347317 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.367505 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371065 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7158f8a-be32-4700-857f-faf9157f99f5-serving-cert\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371096 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371129 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1eabb5ac-ae9e-4853-a2ec-2d821a4883f8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jt8f8\" (UID: \"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371158 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/64d60c19-a655-408a-99e4-becff3e27018-node-pullsecrets\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371227 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e95c252-bd71-44fe-a8f1-d9a346d8a882-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371471 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw7zz\" (UniqueName: \"kubernetes.io/projected/1eabb5ac-ae9e-4853-a2ec-2d821a4883f8-kube-api-access-vw7zz\") pod \"cluster-samples-operator-665b6dd947-jt8f8\" (UID: \"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371539 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q56gh\" (UniqueName: \"kubernetes.io/projected/c7158f8a-be32-4700-857f-faf9157f99f5-kube-api-access-q56gh\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371578 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4l45\" (UniqueName: \"kubernetes.io/projected/0c16a673-e56a-49ff-ac34-6910e02214a6-kube-api-access-v4l45\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371603 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89746c70-7e6b-4f62-acb0-25848752b0bf-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371634 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rrx7w\" (UniqueName: \"kubernetes.io/projected/904ff956-5fbf-4e43-aede-3fa612c9bb70-kube-api-access-rrx7w\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371653 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371694 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-etcd-serving-ca\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371715 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-serving-cert\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371750 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371793 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371817 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64d60c19-a655-408a-99e4-becff3e27018-audit-dir\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371844 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9l2m\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-kube-api-access-r9l2m\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371862 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-audit\") pod 
\"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371881 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-image-import-ca\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371915 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-etcd-ca\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371933 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e95c252-bd71-44fe-a8f1-d9a346d8a882-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371960 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91a268d0-59c0-4e7f-8b78-260d14051e34-config\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371985 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zp8n\" (UniqueName: \"kubernetes.io/projected/91a268d0-59c0-4e7f-8b78-260d14051e34-kube-api-access-6zp8n\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372087 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372108 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-dir\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372166 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372185 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5765190c-206a-481f-a72e-4f119e8881bc-etcd-client\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372202 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372220 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372260 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372301 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372401 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-certificates\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372426 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-trusted-ca\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372468 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-config\") pod \"etcd-operator-b45778765-lrgnw\" (UID: 
\"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372485 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwhzd\" (UniqueName: \"kubernetes.io/projected/64d60c19-a655-408a-99e4-becff3e27018-kube-api-access-bwhzd\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372500 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-etcd-service-ca\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372544 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-audit-policies\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372567 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/904ff956-5fbf-4e43-aede-3fa612c9bb70-audit-dir\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372620 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50ac8539-334d-4811-8b3e-7a2df9e4c931-serving-cert\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372639 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372728 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-client-ca\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372802 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372829 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f699h\" (UniqueName: \"kubernetes.io/projected/89746c70-7e6b-4f62-acb0-25848752b0bf-kube-api-access-f699h\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372868 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372887 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372906 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-etcd-client\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372945 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-tls\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372961 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64q7c\" (UniqueName: \"kubernetes.io/projected/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-kube-api-access-64q7c\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373011 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-config\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373108 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-serving-cert\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 
14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373130 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89746c70-7e6b-4f62-acb0-25848752b0bf-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373193 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373214 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373620 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-encryption-config\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373645 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-config\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373693 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-service-ca-bundle\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373717 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-config\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373781 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/89746c70-7e6b-4f62-acb0-25848752b0bf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373854 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373881 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2hd8\" (UniqueName: \"kubernetes.io/projected/8285f69a-516d-4bdd-9a14-72d966a0b208-kube-api-access-t2hd8\") pod \"downloads-7954f5f757-j7zvj\" (UID: \"8285f69a-516d-4bdd-9a14-72d966a0b208\") " pod="openshift-console/downloads-7954f5f757-j7zvj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373916 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-policies\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373937 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373993 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcl92\" (UniqueName: \"kubernetes.io/projected/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-kube-api-access-xcl92\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374014 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/91a268d0-59c0-4e7f-8b78-260d14051e34-images\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374089 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/91a268d0-59c0-4e7f-8b78-260d14051e34-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374107 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5765190c-206a-481f-a72e-4f119e8881bc-serving-cert\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374168 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-etcd-client\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374187 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-serving-cert\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374204 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-bound-sa-token\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.374250 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:16.874230995 +0000 UTC m=+138.951064244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374290 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-trusted-ca\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374326 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljxl2\" (UniqueName: \"kubernetes.io/projected/5765190c-206a-481f-a72e-4f119e8881bc-kube-api-access-ljxl2\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374343 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfknp\" (UniqueName: \"kubernetes.io/projected/50ac8539-334d-4811-8b3e-7a2df9e4c931-kube-api-access-tfknp\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374360 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374396 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-encryption-config\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374413 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-config\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.386926 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.407202 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.427608 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.447189 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.467458 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.474960 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.475084 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:16.975062493 +0000 UTC m=+139.051895522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475213 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9l2m\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-kube-api-access-r9l2m\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475245 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4449adc-13fa-40ee-a058-f42120e5cbee-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475269 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/677296cf-109d-4fc1-b3db-c8312605a5fb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: \"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475310 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-etcd-ca\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475376 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91a268d0-59c0-4e7f-8b78-260d14051e34-config\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475408 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zp8n\" (UniqueName: \"kubernetes.io/projected/91a268d0-59c0-4e7f-8b78-260d14051e34-kube-api-access-6zp8n\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475433 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a94b1199-eac7-4e88-ad39-44936959740c-signing-cabundle\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475455 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/64f0091d-255f-4e9a-a14c-33d240892e51-proxy-tls\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475479 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-dir\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475502 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4frg\" (UniqueName: \"kubernetes.io/projected/53985f44-9907-48a1-8912-6163cecceba9-kube-api-access-w4frg\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475527 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475552 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-stats-auth\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475577 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475595 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-dir\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475602 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475796 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475878 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475918 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475939 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-etcd-ca\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475957 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec3e08f-1312-4857-b152-cde8e51aad05-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-67gqb\" (UID: \"2ec3e08f-1312-4857-b152-cde8e51aad05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.476034 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-trusted-ca\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.476157 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-etcd-service-ca\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.476230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-audit-policies\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477155 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8tnw\" (UniqueName: \"kubernetes.io/projected/a605a533-8d8c-47bc-a04c-0739f97482e6-kube-api-access-d8tnw\") 
pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.476979 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-etcd-service-ca\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477155 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-audit-policies\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.476306 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91a268d0-59c0-4e7f-8b78-260d14051e34-config\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477236 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-metrics-certs\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477271 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/92715363-5170-4018-8a70-eb8274f5ffe0-srv-cert\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477305 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0-cert\") pod \"ingress-canary-rfwp8\" (UID: \"95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0\") " pod="openshift-ingress-canary/ingress-canary-rfwp8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477343 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f699h\" (UniqueName: \"kubernetes.io/projected/89746c70-7e6b-4f62-acb0-25848752b0bf-kube-api-access-f699h\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477382 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn7jw\" (UniqueName: \"kubernetes.io/projected/2c1970f7-f131-4594-b396-d33bb9776e33-kube-api-access-zn7jw\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477443 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-tls\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477484 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64q7c\" (UniqueName: \"kubernetes.io/projected/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-kube-api-access-64q7c\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477695 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477735 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-etcd-client\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477770 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjjp5\" (UniqueName: \"kubernetes.io/projected/ef463925-8c6c-4217-9bba-e15e1283c4c8-kube-api-access-hjjp5\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477799 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a14f9ae8-3c9b-4618-8255-a55408525925-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: \"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477833 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-config\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477861 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-serving-cert\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477887 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c4449adc-13fa-40ee-a058-f42120e5cbee-config\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477916 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477951 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-encryption-config\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477981 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-config\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.478017 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-service-ca-bundle\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.478064 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-config\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.478097 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.478166 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-socket-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.478391 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a14f9ae8-3c9b-4618-8255-a55408525925-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: 
\"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.478493 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.478538 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/677296cf-109d-4fc1-b3db-c8312605a5fb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: \"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.478576 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/64f0091d-255f-4e9a-a14c-33d240892e51-images\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.479059 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.479078 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.479281 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.479741 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-config\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.479893 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 14:36:16.979879895 +0000 UTC m=+139.056712924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.480082 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-policies\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.480179 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmkp4\" (UniqueName: \"kubernetes.io/projected/4c2958e3-5395-4efd-8b8f-f3e70fd9fcea-kube-api-access-gmkp4\") pod \"multus-admission-controller-857f4d67dd-q69sb\" (UID: \"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.480260 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbn55\" (UniqueName: \"kubernetes.io/projected/95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0-kube-api-access-qbn55\") pod \"ingress-canary-rfwp8\" (UID: \"95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0\") " pod="openshift-ingress-canary/ingress-canary-rfwp8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.480396 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-policies\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.480091 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-service-ca-bundle\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.480139 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-trusted-ca\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.480637 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-config\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: 
I0121 14:36:16.481340 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-bound-sa-token\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481424 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/91a268d0-59c0-4e7f-8b78-260d14051e34-images\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481454 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481494 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-serving-cert\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481566 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-registration-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481600 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljxl2\" (UniqueName: \"kubernetes.io/projected/5765190c-206a-481f-a72e-4f119e8881bc-kube-api-access-ljxl2\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481611 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-etcd-client\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481629 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481691 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-encryption-config\") pod \"apiserver-76f77b778f-x9bhh\" (UID: 
\"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481731 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-default-certificate\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481791 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-config\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481821 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh6gd\" (UniqueName: \"kubernetes.io/projected/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-kube-api-access-dh6gd\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481845 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9467c15f-f3fe-4594-b97d-0838d43877d1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm6gk\" (UID: \"9467c15f-f3fe-4594-b97d-0838d43877d1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481876 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481910 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ef463925-8c6c-4217-9bba-e15e1283c4c8-srv-cert\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481946 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trmth\" (UniqueName: \"kubernetes.io/projected/179de16d-c6d0-4cda-8d1f-8c2396301175-kube-api-access-trmth\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482003 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsdgj\" (UniqueName: \"kubernetes.io/projected/9467c15f-f3fe-4594-b97d-0838d43877d1-kube-api-access-bsdgj\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm6gk\" (UID: 
\"9467c15f-f3fe-4594-b97d-0838d43877d1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482088 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/91a268d0-59c0-4e7f-8b78-260d14051e34-images\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482231 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78db9f9d-1963-42d2-9e52-da80ef710af8-serving-cert\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482346 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q56gh\" (UniqueName: \"kubernetes.io/projected/c7158f8a-be32-4700-857f-faf9157f99f5-kube-api-access-q56gh\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482451 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw7zz\" (UniqueName: \"kubernetes.io/projected/1eabb5ac-ae9e-4853-a2ec-2d821a4883f8-kube-api-access-vw7zz\") pod \"cluster-samples-operator-665b6dd947-jt8f8\" (UID: \"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482513 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482563 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89746c70-7e6b-4f62-acb0-25848752b0bf-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482682 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrx7w\" (UniqueName: \"kubernetes.io/projected/904ff956-5fbf-4e43-aede-3fa612c9bb70-kube-api-access-rrx7w\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482761 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn2m5\" (UniqueName: \"kubernetes.io/projected/78db9f9d-1963-42d2-9e52-da80ef710af8-kube-api-access-zn2m5\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 
21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482852 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.483165 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-serving-cert\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.483259 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-mountpoint-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.483447 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.483566 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/031f1783-31bd-4008-ace8-3ede7d0a86de-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.483714 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-trusted-ca\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.483786 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53985f44-9907-48a1-8912-6163cecceba9-config-volume\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.483857 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-audit\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.483994 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-image-import-ca\") 
pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484097 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dtcw\" (UniqueName: \"kubernetes.io/projected/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-kube-api-access-2dtcw\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484155 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482689 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484292 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4449adc-13fa-40ee-a058-f42120e5cbee-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484379 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jr7b\" (UniqueName: \"kubernetes.io/projected/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-kube-api-access-4jr7b\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484464 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e95c252-bd71-44fe-a8f1-d9a346d8a882-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484538 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53985f44-9907-48a1-8912-6163cecceba9-metrics-tls\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484680 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn78g\" (UniqueName: \"kubernetes.io/projected/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-kube-api-access-cn78g\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") 
" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484855 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484943 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70656800-9429-43df-a1cb-7c8617d23b3f-config-volume\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485200 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485272 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-webhook-cert\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485368 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5blcl\" (UniqueName: \"kubernetes.io/projected/64f0091d-255f-4e9a-a14c-33d240892e51-kube-api-access-5blcl\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485472 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5765190c-206a-481f-a72e-4f119e8881bc-etcd-client\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485539 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-certificates\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485578 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-config\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc 
kubenswrapper[4902]: I0121 14:36:16.485593 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-audit\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485614 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwhzd\" (UniqueName: \"kubernetes.io/projected/64d60c19-a655-408a-99e4-becff3e27018-kube-api-access-bwhzd\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485655 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43c52dc8-25a9-44d5-bea6-ecd091f55d54-metrics-tls\") pod \"dns-operator-744455d44c-b5657\" (UID: \"43c52dc8-25a9-44d5-bea6-ecd091f55d54\") " pod="openshift-dns-operator/dns-operator-744455d44c-b5657" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485691 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/677296cf-109d-4fc1-b3db-c8312605a5fb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: \"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485725 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-tmpfs\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485756 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/92715363-5170-4018-8a70-eb8274f5ffe0-profile-collector-cert\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485796 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/904ff956-5fbf-4e43-aede-3fa612c9bb70-audit-dir\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485828 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50ac8539-334d-4811-8b3e-7a2df9e4c931-serving-cert\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485864 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485889 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485917 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-client-ca\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484393 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89746c70-7e6b-4f62-acb0-25848752b0bf-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485953 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c2958e3-5395-4efd-8b8f-f3e70fd9fcea-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q69sb\" (UID: \"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485991 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/904ff956-5fbf-4e43-aede-3fa612c9bb70-audit-dir\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485995 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh7hx\" (UniqueName: \"kubernetes.io/projected/2ec3e08f-1312-4857-b152-cde8e51aad05-kube-api-access-jh7hx\") pod \"package-server-manager-789f6589d5-67gqb\" (UID: \"2ec3e08f-1312-4857-b152-cde8e51aad05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486061 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486092 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486120 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqlkc\" (UniqueName: \"kubernetes.io/projected/031f1783-31bd-4008-ace8-3ede7d0a86de-kube-api-access-mqlkc\") pod \"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486169 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89746c70-7e6b-4f62-acb0-25848752b0bf-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486201 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486226 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd88z\" (UniqueName: \"kubernetes.io/projected/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-kube-api-access-rd88z\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486248 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-apiservice-cert\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486286 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/89746c70-7e6b-4f62-acb0-25848752b0bf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486309 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfs4v\" (UniqueName: \"kubernetes.io/projected/a94b1199-eac7-4e88-ad39-44936959740c-kube-api-access-lfs4v\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486333 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2hd8\" (UniqueName: \"kubernetes.io/projected/8285f69a-516d-4bdd-9a14-72d966a0b208-kube-api-access-t2hd8\") pod \"downloads-7954f5f757-j7zvj\" (UID: 
\"8285f69a-516d-4bdd-9a14-72d966a0b208\") " pod="openshift-console/downloads-7954f5f757-j7zvj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486355 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486373 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcl92\" (UniqueName: \"kubernetes.io/projected/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-kube-api-access-xcl92\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486396 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14f9ae8-3c9b-4618-8255-a55408525925-config\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: \"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486417 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/91a268d0-59c0-4e7f-8b78-260d14051e34-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486447 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5765190c-206a-481f-a72e-4f119e8881bc-serving-cert\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486471 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-etcd-client\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486489 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a605a533-8d8c-47bc-a04c-0739f97482e6-proxy-tls\") pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486510 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-trusted-ca\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc 
kubenswrapper[4902]: I0121 14:36:16.486534 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfknp\" (UniqueName: \"kubernetes.io/projected/50ac8539-334d-4811-8b3e-7a2df9e4c931-kube-api-access-tfknp\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486560 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a94b1199-eac7-4e88-ad39-44936959740c-signing-key\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486595 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-certs\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486621 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/64f0091d-255f-4e9a-a14c-33d240892e51-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486651 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-node-bootstrap-token\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486715 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7158f8a-be32-4700-857f-faf9157f99f5-serving-cert\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486758 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486777 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a605a533-8d8c-47bc-a04c-0739f97482e6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486794 4902 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfg9t\" (UniqueName: \"kubernetes.io/projected/70656800-9429-43df-a1cb-7c8617d23b3f-kube-api-access-sfg9t\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486815 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1eabb5ac-ae9e-4853-a2ec-2d821a4883f8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jt8f8\" (UID: \"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486835 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ef463925-8c6c-4217-9bba-e15e1283c4c8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486852 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z66wm\" (UniqueName: \"kubernetes.io/projected/92715363-5170-4018-8a70-eb8274f5ffe0-kube-api-access-z66wm\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486871 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/64d60c19-a655-408a-99e4-becff3e27018-node-pullsecrets\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486890 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-metrics-tls\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486905 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7dkh\" (UniqueName: \"kubernetes.io/projected/29cc0582-bf2f-4e0b-a351-2d933fdbd52f-kube-api-access-j7dkh\") pod \"migrator-59844c95c7-2n2xb\" (UID: \"29cc0582-bf2f-4e0b-a351-2d933fdbd52f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486923 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486947 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e95c252-bd71-44fe-a8f1-d9a346d8a882-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486974 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031f1783-31bd-4008-ace8-3ede7d0a86de-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486998 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4l45\" (UniqueName: \"kubernetes.io/projected/0c16a673-e56a-49ff-ac34-6910e02214a6-kube-api-access-v4l45\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487017 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-plugins-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487032 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-csi-data-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487089 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487107 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-etcd-serving-ca\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487150 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf7xr\" (UniqueName: \"kubernetes.io/projected/43c52dc8-25a9-44d5-bea6-ecd091f55d54-kube-api-access-vf7xr\") pod \"dns-operator-744455d44c-b5657\" (UID: \"43c52dc8-25a9-44d5-bea6-ecd091f55d54\") " pod="openshift-dns-operator/dns-operator-744455d44c-b5657" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487210 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/64d60c19-a655-408a-99e4-becff3e27018-audit-dir\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487268 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-service-ca-bundle\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487334 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64d60c19-a655-408a-99e4-becff3e27018-audit-dir\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487396 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-config\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487561 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-etcd-serving-ca\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487873 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/64d60c19-a655-408a-99e4-becff3e27018-node-pullsecrets\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.488170 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-tls\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.488292 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.488327 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.488767 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e95c252-bd71-44fe-a8f1-d9a346d8a882-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.489414 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70656800-9429-43df-a1cb-7c8617d23b3f-secret-volume\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.489468 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78db9f9d-1963-42d2-9e52-da80ef710af8-config\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.490126 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.491332 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-trusted-ca\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.491750 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-certificates\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.491980 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.493500 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.493611 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/91a268d0-59c0-4e7f-8b78-260d14051e34-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.493738 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-encryption-config\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.493741 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5765190c-206a-481f-a72e-4f119e8881bc-etcd-client\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.493798 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1eabb5ac-ae9e-4853-a2ec-2d821a4883f8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jt8f8\" (UID: \"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.493819 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5765190c-206a-481f-a72e-4f119e8881bc-serving-cert\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.493931 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-serving-cert\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.494063 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-encryption-config\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.494125 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.494336 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.494712 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.494736 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.494897 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-serving-cert\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.495258 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-serving-cert\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.495318 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e95c252-bd71-44fe-a8f1-d9a346d8a882-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.496156 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/89746c70-7e6b-4f62-acb0-25848752b0bf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.496922 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-etcd-client\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.500890 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50ac8539-334d-4811-8b3e-7a2df9e4c931-serving-cert\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.507230 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.527229 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.549378 4902 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.567389 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.587535 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.590735 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.591073 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.091016141 +0000 UTC m=+139.167849190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591380 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591438 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/677296cf-109d-4fc1-b3db-c8312605a5fb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: \"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591471 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/64f0091d-255f-4e9a-a14c-33d240892e51-images\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591495 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmkp4\" (UniqueName: \"kubernetes.io/projected/4c2958e3-5395-4efd-8b8f-f3e70fd9fcea-kube-api-access-gmkp4\") pod \"multus-admission-controller-857f4d67dd-q69sb\" (UID: \"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 
14:36:16.591517 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbn55\" (UniqueName: \"kubernetes.io/projected/95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0-kube-api-access-qbn55\") pod \"ingress-canary-rfwp8\" (UID: \"95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0\") " pod="openshift-ingress-canary/ingress-canary-rfwp8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591543 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-registration-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591619 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-default-certificate\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591678 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh6gd\" (UniqueName: \"kubernetes.io/projected/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-kube-api-access-dh6gd\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591706 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9467c15f-f3fe-4594-b97d-0838d43877d1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm6gk\" (UID: \"9467c15f-f3fe-4594-b97d-0838d43877d1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591744 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ef463925-8c6c-4217-9bba-e15e1283c4c8-srv-cert\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591780 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trmth\" (UniqueName: \"kubernetes.io/projected/179de16d-c6d0-4cda-8d1f-8c2396301175-kube-api-access-trmth\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591815 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsdgj\" (UniqueName: \"kubernetes.io/projected/9467c15f-f3fe-4594-b97d-0838d43877d1-kube-api-access-bsdgj\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm6gk\" (UID: \"9467c15f-f3fe-4594-b97d-0838d43877d1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591849 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/78db9f9d-1963-42d2-9e52-da80ef710af8-serving-cert\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591898 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn2m5\" (UniqueName: \"kubernetes.io/projected/78db9f9d-1963-42d2-9e52-da80ef710af8-kube-api-access-zn2m5\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.591910 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.091899051 +0000 UTC m=+139.168732090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591955 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-mountpoint-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592012 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/031f1783-31bd-4008-ace8-3ede7d0a86de-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592037 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-trusted-ca\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592083 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53985f44-9907-48a1-8912-6163cecceba9-config-volume\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592136 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dtcw\" (UniqueName: \"kubernetes.io/projected/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-kube-api-access-2dtcw\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592174 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4449adc-13fa-40ee-a058-f42120e5cbee-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592186 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-registration-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592216 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jr7b\" (UniqueName: \"kubernetes.io/projected/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-kube-api-access-4jr7b\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592240 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-mountpoint-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592251 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53985f44-9907-48a1-8912-6163cecceba9-metrics-tls\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592370 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn78g\" (UniqueName: \"kubernetes.io/projected/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-kube-api-access-cn78g\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592459 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70656800-9429-43df-a1cb-7c8617d23b3f-config-volume\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592514 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-webhook-cert\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592566 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5blcl\" 
(UniqueName: \"kubernetes.io/projected/64f0091d-255f-4e9a-a14c-33d240892e51-kube-api-access-5blcl\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592626 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43c52dc8-25a9-44d5-bea6-ecd091f55d54-metrics-tls\") pod \"dns-operator-744455d44c-b5657\" (UID: \"43c52dc8-25a9-44d5-bea6-ecd091f55d54\") " pod="openshift-dns-operator/dns-operator-744455d44c-b5657" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592748 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/677296cf-109d-4fc1-b3db-c8312605a5fb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: \"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592796 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/677296cf-109d-4fc1-b3db-c8312605a5fb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: \"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592798 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-tmpfs\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592854 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/92715363-5170-4018-8a70-eb8274f5ffe0-profile-collector-cert\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592887 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh7hx\" (UniqueName: \"kubernetes.io/projected/2ec3e08f-1312-4857-b152-cde8e51aad05-kube-api-access-jh7hx\") pod \"package-server-manager-789f6589d5-67gqb\" (UID: \"2ec3e08f-1312-4857-b152-cde8e51aad05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592920 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c2958e3-5395-4efd-8b8f-f3e70fd9fcea-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q69sb\" (UID: \"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592940 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqlkc\" (UniqueName: \"kubernetes.io/projected/031f1783-31bd-4008-ace8-3ede7d0a86de-kube-api-access-mqlkc\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592975 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd88z\" (UniqueName: \"kubernetes.io/projected/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-kube-api-access-rd88z\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593009 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-apiservice-cert\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593033 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfs4v\" (UniqueName: \"kubernetes.io/projected/a94b1199-eac7-4e88-ad39-44936959740c-kube-api-access-lfs4v\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593082 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14f9ae8-3c9b-4618-8255-a55408525925-config\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: \"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593103 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a605a533-8d8c-47bc-a04c-0739f97482e6-proxy-tls\") pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593127 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a94b1199-eac7-4e88-ad39-44936959740c-signing-key\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593143 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-certs\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593160 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/64f0091d-255f-4e9a-a14c-33d240892e51-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" 
Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593177 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-node-bootstrap-token\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593205 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a605a533-8d8c-47bc-a04c-0739f97482e6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593258 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfg9t\" (UniqueName: \"kubernetes.io/projected/70656800-9429-43df-a1cb-7c8617d23b3f-kube-api-access-sfg9t\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593298 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ef463925-8c6c-4217-9bba-e15e1283c4c8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593316 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z66wm\" (UniqueName: \"kubernetes.io/projected/92715363-5170-4018-8a70-eb8274f5ffe0-kube-api-access-z66wm\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593334 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-metrics-tls\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593352 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7dkh\" (UniqueName: \"kubernetes.io/projected/29cc0582-bf2f-4e0b-a351-2d933fdbd52f-kube-api-access-j7dkh\") pod \"migrator-59844c95c7-2n2xb\" (UID: \"29cc0582-bf2f-4e0b-a351-2d933fdbd52f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593369 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593395 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031f1783-31bd-4008-ace8-3ede7d0a86de-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593412 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-plugins-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593455 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-csi-data-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593495 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf7xr\" (UniqueName: \"kubernetes.io/projected/43c52dc8-25a9-44d5-bea6-ecd091f55d54-kube-api-access-vf7xr\") pod \"dns-operator-744455d44c-b5657\" (UID: \"43c52dc8-25a9-44d5-bea6-ecd091f55d54\") " pod="openshift-dns-operator/dns-operator-744455d44c-b5657" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593517 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-service-ca-bundle\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593600 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70656800-9429-43df-a1cb-7c8617d23b3f-secret-volume\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593653 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78db9f9d-1963-42d2-9e52-da80ef710af8-config\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593675 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/677296cf-109d-4fc1-b3db-c8312605a5fb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: 
\"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593719 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4449adc-13fa-40ee-a058-f42120e5cbee-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593746 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a94b1199-eac7-4e88-ad39-44936959740c-signing-cabundle\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593765 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/64f0091d-255f-4e9a-a14c-33d240892e51-proxy-tls\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593788 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4frg\" (UniqueName: \"kubernetes.io/projected/53985f44-9907-48a1-8912-6163cecceba9-kube-api-access-w4frg\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593808 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593827 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-stats-auth\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593842 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593857 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec3e08f-1312-4857-b152-cde8e51aad05-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-67gqb\" (UID: \"2ec3e08f-1312-4857-b152-cde8e51aad05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 
14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593892 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8tnw\" (UniqueName: \"kubernetes.io/projected/a605a533-8d8c-47bc-a04c-0739f97482e6-kube-api-access-d8tnw\") pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593944 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-metrics-certs\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593971 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/92715363-5170-4018-8a70-eb8274f5ffe0-srv-cert\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593992 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0-cert\") pod \"ingress-canary-rfwp8\" (UID: \"95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0\") " pod="openshift-ingress-canary/ingress-canary-rfwp8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.594023 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn7jw\" (UniqueName: \"kubernetes.io/projected/2c1970f7-f131-4594-b396-d33bb9776e33-kube-api-access-zn7jw\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.594072 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjjp5\" (UniqueName: \"kubernetes.io/projected/ef463925-8c6c-4217-9bba-e15e1283c4c8-kube-api-access-hjjp5\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.594098 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a14f9ae8-3c9b-4618-8255-a55408525925-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: \"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.594141 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4449adc-13fa-40ee-a058-f42120e5cbee-config\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.594175 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.594197 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-socket-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.594221 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a14f9ae8-3c9b-4618-8255-a55408525925-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: \"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.594731 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-trusted-ca\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.595534 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-default-certificate\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.595748 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70656800-9429-43df-a1cb-7c8617d23b3f-config-volume\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.595780 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ef463925-8c6c-4217-9bba-e15e1283c4c8-srv-cert\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593849 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-tmpfs\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.596199 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/031f1783-31bd-4008-ace8-3ede7d0a86de-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.596514 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14f9ae8-3c9b-4618-8255-a55408525925-config\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: \"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.596590 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78db9f9d-1963-42d2-9e52-da80ef710af8-config\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.596747 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/677296cf-109d-4fc1-b3db-c8312605a5fb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: \"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.596845 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/64f0091d-255f-4e9a-a14c-33d240892e51-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.597357 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a605a533-8d8c-47bc-a04c-0739f97482e6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.599299 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/92715363-5170-4018-8a70-eb8274f5ffe0-profile-collector-cert\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.599461 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43c52dc8-25a9-44d5-bea6-ecd091f55d54-metrics-tls\") pod \"dns-operator-744455d44c-b5657\" (UID: \"43c52dc8-25a9-44d5-bea6-ecd091f55d54\") " pod="openshift-dns-operator/dns-operator-744455d44c-b5657" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.599600 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-csi-data-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.600028 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.600105 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9467c15f-f3fe-4594-b97d-0838d43877d1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm6gk\" (UID: \"9467c15f-f3fe-4594-b97d-0838d43877d1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.600238 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031f1783-31bd-4008-ace8-3ede7d0a86de-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.600311 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-plugins-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.600392 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a14f9ae8-3c9b-4618-8255-a55408525925-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: \"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.601322 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78db9f9d-1963-42d2-9e52-da80ef710af8-serving-cert\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.601401 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-service-ca-bundle\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.601882 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c2958e3-5395-4efd-8b8f-f3e70fd9fcea-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q69sb\" (UID: \"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.602395 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.602432 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4449adc-13fa-40ee-a058-f42120e5cbee-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.602510 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-socket-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.602530 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70656800-9429-43df-a1cb-7c8617d23b3f-secret-volume\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.603126 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.603524 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/92715363-5170-4018-8a70-eb8274f5ffe0-srv-cert\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.603684 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4449adc-13fa-40ee-a058-f42120e5cbee-config\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.603902 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-metrics-certs\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.603927 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-metrics-tls\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 
14:36:16.604237 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec3e08f-1312-4857-b152-cde8e51aad05-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-67gqb\" (UID: \"2ec3e08f-1312-4857-b152-cde8e51aad05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.604675 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.605289 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ef463925-8c6c-4217-9bba-e15e1283c4c8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.605516 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-stats-auth\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.606719 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a605a533-8d8c-47bc-a04c-0739f97482e6-proxy-tls\") pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.607172 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.612480 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a94b1199-eac7-4e88-ad39-44936959740c-signing-key\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.628300 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.647444 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.651017 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a94b1199-eac7-4e88-ad39-44936959740c-signing-cabundle\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.667872 4902 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.687114 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.695263 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.695444 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.19541986 +0000 UTC m=+139.272252889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.695558 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.695665 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.695891 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.195883896 +0000 UTC m=+139.272716925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.696012 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.696275 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-webhook-cert\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.696994 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-apiservice-cert\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.707281 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.713024 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/64f0091d-255f-4e9a-a14c-33d240892e51-images\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.727030 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.747511 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.754155 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/64f0091d-255f-4e9a-a14c-33d240892e51-proxy-tls\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.767748 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.787194 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.793747 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0-cert\") pod \"ingress-canary-rfwp8\" (UID: \"95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0\") " pod="openshift-ingress-canary/ingress-canary-rfwp8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.796641 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.796816 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.296794816 +0000 UTC m=+139.373627855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.797615 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.797943 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.297933974 +0000 UTC m=+139.374767003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.808556 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.828193 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.884423 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4bc8\" (UniqueName: \"kubernetes.io/projected/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-kube-api-access-g4bc8\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.898957 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.899420 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.399386613 +0000 UTC m=+139.476219812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.899953 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.900442 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.400423498 +0000 UTC m=+139.477256567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.906583 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpczv\" (UniqueName: \"kubernetes.io/projected/3a537cbb-d314-4f04-94c8-625c03eb5a68-kube-api-access-dpczv\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.926248 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg77p\" (UniqueName: \"kubernetes.io/projected/853f0809-8828-4976-9b04-dd078ab64ced-kube-api-access-xg77p\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.927256 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.935518 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53985f44-9907-48a1-8912-6163cecceba9-metrics-tls\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.938677 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.947447 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.956763 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.967291 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.973111 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53985f44-9907-48a1-8912-6163cecceba9-config-volume\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.988375 4902 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.000961 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.001157 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.501122072 +0000 UTC m=+139.577955101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.001677 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.002064 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.502035652 +0000 UTC m=+139.578868681 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.007557 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.027603 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.047790 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.067790 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.080721 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-certs\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.089979 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.100187 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-node-bootstrap-token\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.104818 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.106587 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.606527854 +0000 UTC m=+139.683361063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.106773 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.107517 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.607506637 +0000 UTC m=+139.684339676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.125292 4902 request.go:700] Waited for 1.831098344s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.127414 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.147203 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.167421 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.187454 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.204939 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wpch6"] Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.210476 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.210730 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.211476 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.71146146 +0000 UTC m=+139.788294489 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.226882 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.247269 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.266668 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.271306 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-config\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.287831 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.307120 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.312672 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.313273 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.81324283 +0000 UTC m=+139.890075899 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.315254 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbhnr\" (UniqueName: \"kubernetes.io/projected/01ee90aa-9465-4cd2-97a0-ce735d557649-kube-api-access-gbhnr\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.327159 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.336347 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-image-import-ca\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.349906 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.357245 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.367147 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.368507 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-client-ca\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:17 crc kubenswrapper[4902]: W0121 14:36:17.373865 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a537cbb_d314_4f04_94c8_625c03eb5a68.slice/crio-051dcc6bc0cb5ed3e7bc82b96a35dfc4490f6f6a020b93d8511dea11f6ca28c9 WatchSource:0}: Error finding container 051dcc6bc0cb5ed3e7bc82b96a35dfc4490f6f6a020b93d8511dea11f6ca28c9: Status 404 returned error can't find the container with id 051dcc6bc0cb5ed3e7bc82b96a35dfc4490f6f6a020b93d8511dea11f6ca28c9 Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.383068 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9nw4v"] Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.387994 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 14:36:17 crc kubenswrapper[4902]: W0121 14:36:17.389837 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod853f0809_8828_4976_9b04_dd078ab64ced.slice/crio-11dbd86a6b371ca7401386f5e9d390f798d2eff9c897fbde80c73fd4547eac53 WatchSource:0}: Error finding container 11dbd86a6b371ca7401386f5e9d390f798d2eff9c897fbde80c73fd4547eac53: Status 404 returned error can't find the container with id 11dbd86a6b371ca7401386f5e9d390f798d2eff9c897fbde80c73fd4547eac53 Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.391242 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7158f8a-be32-4700-857f-faf9157f99f5-serving-cert\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.407113 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.413979 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.414297 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.914260584 +0000 UTC m=+139.991093743 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.414884 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.415323 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.91531008 +0000 UTC m=+139.992143109 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.427263 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.433486 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-config\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.447866 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.467227 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.474230 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5"
Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.480202 4902 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition
Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.480350 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles podName:c7158f8a-be32-4700-857f-faf9157f99f5 nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.980330577 +0000 UTC m=+140.057163616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles") pod "controller-manager-879f6c89f-tn2zp" (UID: "c7158f8a-be32-4700-857f-faf9157f99f5") : failed to sync configmap cache: timed out waiting for the condition
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.486578 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.499511 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.506763 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.516113 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.517195 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.017149172 +0000 UTC m=+140.093982371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.517855 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.533421 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.560888 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9l2m\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-kube-api-access-r9l2m\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.581098 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zp8n\" (UniqueName: \"kubernetes.io/projected/91a268d0-59c0-4e7f-8b78-260d14051e34-kube-api-access-6zp8n\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.601875 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f699h\" (UniqueName: \"kubernetes.io/projected/89746c70-7e6b-4f62-acb0-25848752b0bf-kube-api-access-f699h\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.618013 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.618812 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.118777626 +0000 UTC m=+140.195610655 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.621098 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64q7c\" (UniqueName: \"kubernetes.io/projected/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-kube-api-access-64q7c\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.641396 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-bound-sa-token\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.661158 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljxl2\" (UniqueName: \"kubernetes.io/projected/5765190c-206a-481f-a72e-4f119e8881bc-kube-api-access-ljxl2\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.682010 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q56gh\" (UniqueName: \"kubernetes.io/projected/c7158f8a-be32-4700-857f-faf9157f99f5-kube-api-access-q56gh\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.707269 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw7zz\" (UniqueName: \"kubernetes.io/projected/1eabb5ac-ae9e-4853-a2ec-2d821a4883f8-kube-api-access-vw7zz\") pod \"cluster-samples-operator-665b6dd947-jt8f8\" (UID: \"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.719301 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.719501 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.219469529 +0000 UTC m=+140.296302568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.719852 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.720209 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.220196274 +0000 UTC m=+140.297029303 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.722210 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrx7w\" (UniqueName: \"kubernetes.io/projected/904ff956-5fbf-4e43-aede-3fa612c9bb70-kube-api-access-rrx7w\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.732191 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.739976 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwhzd\" (UniqueName: \"kubernetes.io/projected/64d60c19-a655-408a-99e4-becff3e27018-kube-api-access-bwhzd\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.760805 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89746c70-7e6b-4f62-acb0-25848752b0bf-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.765349 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.780984 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4l45\" (UniqueName: \"kubernetes.io/projected/0c16a673-e56a-49ff-ac34-6910e02214a6-kube-api-access-v4l45\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.784940 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.802464 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.821609 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.822582 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.322554683 +0000 UTC m=+140.399387712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.829688 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcl92\" (UniqueName: \"kubernetes.io/projected/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-kube-api-access-xcl92\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.832502 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.843297 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfknp\" (UniqueName: \"kubernetes.io/projected/50ac8539-334d-4811-8b3e-7a2df9e4c931-kube-api-access-tfknp\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.847397 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.867770 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmkp4\" (UniqueName: \"kubernetes.io/projected/4c2958e3-5395-4efd-8b8f-f3e70fd9fcea-kube-api-access-gmkp4\") pod \"multus-admission-controller-857f4d67dd-q69sb\" (UID: \"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.871437 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-9hktz"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.885386 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trmth\" (UniqueName: \"kubernetes.io/projected/179de16d-c6d0-4cda-8d1f-8c2396301175-kube-api-access-trmth\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.891646 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.909828 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbn55\" (UniqueName: \"kubernetes.io/projected/95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0-kube-api-access-qbn55\") pod \"ingress-canary-rfwp8\" (UID: \"95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0\") " pod="openshift-ingress-canary/ingress-canary-rfwp8"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.915308 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.925307 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.925875 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.425860284 +0000 UTC m=+140.502693313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.927332 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh6gd\" (UniqueName: \"kubernetes.io/projected/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-kube-api-access-dh6gd\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.943421 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn2m5\" (UniqueName: \"kubernetes.io/projected/78db9f9d-1963-42d2-9e52-da80ef710af8-kube-api-access-zn2m5\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.944210 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"]
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.966729 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.978797 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x9bhh"]
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.979202 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb"
Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.988014 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dtcw\" (UniqueName: \"kubernetes.io/projected/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-kube-api-access-2dtcw\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.003167 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jr7b\" (UniqueName: \"kubernetes.io/projected/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-kube-api-access-4jr7b\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.013866 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.019242 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.027938 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.028289 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.528255435 +0000 UTC m=+140.605088474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.029479 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.029629 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.030455 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.530440249 +0000 UTC m=+140.607273468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.031543 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4449adc-13fa-40ee-a058-f42120e5cbee-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.031679 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.036117 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-57jmg"]
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.046831 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn78g\" (UniqueName: \"kubernetes.io/projected/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-kube-api-access-cn78g\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.058563 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.060881 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz"]
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.062834 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5blcl\" (UniqueName: \"kubernetes.io/projected/64f0091d-255f-4e9a-a14c-33d240892e51-kube-api-access-5blcl\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.067345 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.069201 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" event={"ID":"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890","Type":"ContainerStarted","Data":"6a818a5ecd195b0ebaac59b22f2bfa936d40bae0bdb2704bfc7ef05169b47826"}
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.070203 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" event={"ID":"01ee90aa-9465-4cd2-97a0-ce735d557649","Type":"ContainerStarted","Data":"1d6f20bc21db99ffc3b51f783b09029cf7dec2c4ed9b3a8a2f63bf561b414a3a"}
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.071298 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" event={"ID":"3a537cbb-d314-4f04-94c8-625c03eb5a68","Type":"ContainerStarted","Data":"051dcc6bc0cb5ed3e7bc82b96a35dfc4490f6f6a020b93d8511dea11f6ca28c9"}
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.072241 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9nw4v" event={"ID":"853f0809-8828-4976-9b04-dd078ab64ced","Type":"ContainerStarted","Data":"11dbd86a6b371ca7401386f5e9d390f798d2eff9c897fbde80c73fd4547eac53"}
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.076706 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rfwp8"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.093574 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9hktz"]
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.103754 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqlkc\" (UniqueName: \"kubernetes.io/projected/031f1783-31bd-4008-ace8-3ede7d0a86de-kube-api-access-mqlkc\") pod \"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.112373 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh7hx\" (UniqueName: \"kubernetes.io/projected/2ec3e08f-1312-4857-b152-cde8e51aad05-kube-api-access-jh7hx\") pod \"package-server-manager-789f6589d5-67gqb\" (UID: \"2ec3e08f-1312-4857-b152-cde8e51aad05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.113262 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2hd8\" (UniqueName: \"kubernetes.io/projected/8285f69a-516d-4bdd-9a14-72d966a0b208-kube-api-access-t2hd8\") pod \"downloads-7954f5f757-j7zvj\" (UID: \"8285f69a-516d-4bdd-9a14-72d966a0b208\") " pod="openshift-console/downloads-7954f5f757-j7zvj"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.123261 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd88z\" (UniqueName: \"kubernetes.io/projected/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-kube-api-access-rd88z\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w"
Jan 21 14:36:18 crc kubenswrapper[4902]: W0121 14:36:18.123979 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91a268d0_59c0_4e7f_8b78_260d14051e34.slice/crio-315499d391964a9a8af5168675d61a0c2395be7e074ca7a7e847750e9e115529 WatchSource:0}: Error finding container 315499d391964a9a8af5168675d61a0c2395be7e074ca7a7e847750e9e115529: Status 404 returned error can't find the container with id 315499d391964a9a8af5168675d61a0c2395be7e074ca7a7e847750e9e115529
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.127023 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.128401 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n2xzb"]
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.138080 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.138287 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.638263823 +0000 UTC m=+140.715096852 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.138323 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.138841 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.638831932 +0000 UTC m=+140.715664961 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.140664 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsdgj\" (UniqueName: \"kubernetes.io/projected/9467c15f-f3fe-4594-b97d-0838d43877d1-kube-api-access-bsdgj\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm6gk\" (UID: \"9467c15f-f3fe-4594-b97d-0838d43877d1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.143911 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfg9t\" (UniqueName: \"kubernetes.io/projected/70656800-9429-43df-a1cb-7c8617d23b3f-kube-api-access-sfg9t\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.145272 4902 request.go:700] Waited for 1.549484367s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/serviceaccounts/service-ca/token
Jan 21 14:36:18 crc kubenswrapper[4902]: W0121 14:36:18.146211 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c16a673_e56a_49ff_ac34_6910e02214a6.slice/crio-7b9eaa6ff12a7628df3550e4b5486c4dd30838dd795331af359c3d19256bdd60 WatchSource:0}: Error finding container 7b9eaa6ff12a7628df3550e4b5486c4dd30838dd795331af359c3d19256bdd60: Status 404 returned error can't find the container with id 7b9eaa6ff12a7628df3550e4b5486c4dd30838dd795331af359c3d19256bdd60
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.170496 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfs4v\" (UniqueName: \"kubernetes.io/projected/a94b1199-eac7-4e88-ad39-44936959740c-kube-api-access-lfs4v\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.184668 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.205060 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn7jw\" (UniqueName: \"kubernetes.io/projected/2c1970f7-f131-4594-b396-d33bb9776e33-kube-api-access-zn7jw\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.205325 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-j7zvj"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.226034 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8tnw\" (UniqueName: \"kubernetes.io/projected/a605a533-8d8c-47bc-a04c-0739f97482e6-kube-api-access-d8tnw\") pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.230740 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.242503 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.242877 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.742851148 +0000 UTC m=+140.819684177 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.243096 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.243575 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.743563072 +0000 UTC m=+140.820396111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.243643 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/677296cf-109d-4fc1-b3db-c8312605a5fb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: \"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.244004 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.252515 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.257976 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2lccn"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.269617 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4frg\" (UniqueName: \"kubernetes.io/projected/53985f44-9907-48a1-8912-6163cecceba9-kube-api-access-w4frg\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.282141 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.282798 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf7xr\" (UniqueName: \"kubernetes.io/projected/43c52dc8-25a9-44d5-bea6-ecd091f55d54-kube-api-access-vf7xr\") pod \"dns-operator-744455d44c-b5657\" (UID: \"43c52dc8-25a9-44d5-bea6-ecd091f55d54\") " pod="openshift-dns-operator/dns-operator-744455d44c-b5657"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.286496 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.303591 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.317964 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjjp5\" (UniqueName: \"kubernetes.io/projected/ef463925-8c6c-4217-9bba-e15e1283c4c8-kube-api-access-hjjp5\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.318334 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.329334 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.330457 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.332755 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a14f9ae8-3c9b-4618-8255-a55408525925-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: \"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.338761 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.343965 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.344537 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.844521074 +0000 UTC m=+140.921354103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.344543 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.355623 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.358387 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z66wm\" (UniqueName: \"kubernetes.io/projected/92715363-5170-4018-8a70-eb8274f5ffe0-kube-api-access-z66wm\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.365890 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7dkh\" (UniqueName: \"kubernetes.io/projected/29cc0582-bf2f-4e0b-a351-2d933fdbd52f-kube-api-access-j7dkh\") pod \"migrator-59844c95c7-2n2xb\" (UID: \"29cc0582-bf2f-4e0b-a351-2d933fdbd52f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.383732 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-w2qlx"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.400876 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-v4hs9"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.411538 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-w8c9w"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.415327 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.445539 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.446226 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.946212291 +0000 UTC m=+141.023045330 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.501908 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xm5cd"]
Jan 21 14:36:18 crc kubenswrapper[4902]: W0121 14:36:18.534380 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod179de16d_c6d0_4cda_8d1f_8c2396301175.slice/crio-55ed63decb6129b185123334a130753c5c33884bc167ffd4431cd04957e60efe WatchSource:0}: Error finding container 55ed63decb6129b185123334a130753c5c33884bc167ffd4431cd04957e60efe: Status 404 returned error can't find the container with id 55ed63decb6129b185123334a130753c5c33884bc167ffd4431cd04957e60efe
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.541279 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-b5657"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.542482 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87"]
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.547480 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.547828 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.047811474 +0000 UTC m=+141.124644503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.573757 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.593677 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t"
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.609662 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8"]
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.614576 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2"]
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.637530 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lrgnw"]
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.651126 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.651517 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.151504549 +0000 UTC m=+141.228337578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.656066 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5"]
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.715451 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rfwp8"]
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.718828 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q69sb"]
Jan 21 14:36:18 crc kubenswrapper[4902]: W0121 14:36:18.739064 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeff3ea2d_0de6_4bad_81e6_f3cac0c4d48f.slice/crio-a92c48680c9f7a7f33ca1abfb213e98051895aebd64b396770168e61709627c4 WatchSource:0}: Error finding container a92c48680c9f7a7f33ca1abfb213e98051895aebd64b396770168e61709627c4: Status 404 returned error can't find the container with id a92c48680c9f7a7f33ca1abfb213e98051895aebd64b396770168e61709627c4
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.749397 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-k2wkm"]
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.752423 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.752859 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.252841894 +0000 UTC m=+141.329674913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.805803 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nshzl"]
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.810101 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg"]
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.853915 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.854246 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.3542329 +0000 UTC m=+141.431065929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:18 crc kubenswrapper[4902]: W0121 14:36:18.934300 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95e79e6b_37ae_4e8d_9f95_65e8a8ae49b0.slice/crio-ccd123fd6ec11b6210e75d00c168e4ead7f7a827673c8f2d9be66d639ada2844 WatchSource:0}: Error finding container ccd123fd6ec11b6210e75d00c168e4ead7f7a827673c8f2d9be66d639ada2844: Status 404 returned error can't find the container with id ccd123fd6ec11b6210e75d00c168e4ead7f7a827673c8f2d9be66d639ada2844
Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.954695 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.955024 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.455008156 +0000 UTC m=+141.531841185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.055687 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.056077 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.556065282 +0000 UTC m=+141.632898311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.083085 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" event={"ID":"64d60c19-a655-408a-99e4-becff3e27018","Type":"ContainerStarted","Data":"3b5c9cc51e92d048ecd66327c41f5275b88c5ce220a0a780aad28268d57e2dac"}
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.083862 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-w8c9w" event={"ID":"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a","Type":"ContainerStarted","Data":"68a4542899a519107a49bb37222c446ee45e60c23805946a9d89c67dbc26ea92"}
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.084908 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2lccn" event={"ID":"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1","Type":"ContainerStarted","Data":"04667acda1446fd6a37275303b8fbaa4d36de6e8eca36ef82b6f08e047fc408a"}
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.086640 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" event={"ID":"0c16a673-e56a-49ff-ac34-6910e02214a6","Type":"ContainerStarted","Data":"7b9eaa6ff12a7628df3550e4b5486c4dd30838dd795331af359c3d19256bdd60"}
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.087898 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rfwp8" event={"ID":"95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0","Type":"ContainerStarted","Data":"ccd123fd6ec11b6210e75d00c168e4ead7f7a827673c8f2d9be66d639ada2844"}
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.089110 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" event={"ID":"904ff956-5fbf-4e43-aede-3fa612c9bb70","Type":"ContainerStarted","Data":"49e6e600cb34972f6f70cce4ff6a907b9ecc3aa47fdba7eda5ff2372dea787ca"}
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.090703 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" event={"ID":"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8","Type":"ContainerStarted","Data":"16dbfe7d3d658bb09c023f421c923ac10d5b14c33945b0e67a500dd1c3ce5395"}
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.093337 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" event={"ID":"91a268d0-59c0-4e7f-8b78-260d14051e34","Type":"ContainerStarted","Data":"b60e5d0e06ea28ac604e7f129a54b7e66bbf9b95b0af22ce7f62af7abe20d1d5"}
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.093365 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" event={"ID":"91a268d0-59c0-4e7f-8b78-260d14051e34","Type":"ContainerStarted","Data":"315499d391964a9a8af5168675d61a0c2395be7e074ca7a7e847750e9e115529"}
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.101615 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9nw4v" event={"ID":"853f0809-8828-4976-9b04-dd078ab64ced","Type":"ContainerStarted","Data":"fff0e780f43c17189c7dce1045515753af56428025b126e2b903e1fb3882c9d0"}
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.103982 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" event={"ID":"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea","Type":"ContainerStarted","Data":"dea58454a77ab195fc7990a7797560df7b651eb42b7155f0958e67090ef3cd08"}
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.116371 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" event={"ID":"01ee90aa-9465-4cd2-97a0-ce735d557649","Type":"ContainerStarted","Data":"6352bb96995cea97dbd91f19d4ac33bcf83056c8d4e8ed01ff2fda9bf228a144"}
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.117407 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.135072 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" event={"ID":"3a537cbb-d314-4f04-94c8-625c03eb5a68","Type":"ContainerStarted","Data":"a2021115a06b9247806e6ca8e73e0df80cc17a0ead88a1a22bd38b2a7465b773"}
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.135027 4902 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xrcxf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.135163 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.136698 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" event={"ID":"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f","Type":"ContainerStarted","Data":"a92c48680c9f7a7f33ca1abfb213e98051895aebd64b396770168e61709627c4"}
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.142304 4902 generic.go:334] "Generic (PLEG): container finished" podID="c690c8a8-1bd9-45ff-ba62-93cb7f1ce890" containerID="2070ddad1c2f5e568896413d3c2579ee26a5f2ee71f94c673a6b2981ac178e55" exitCode=0
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.142597 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" event={"ID":"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890","Type":"ContainerDied","Data":"2070ddad1c2f5e568896413d3c2579ee26a5f2ee71f94c673a6b2981ac178e55"}
Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.145692 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" event={"ID":"78db9f9d-1963-42d2-9e52-da80ef710af8","Type":"ContainerStarted","Data":"39057a8537a0d733eba895908d7cba8993302422bc45edb3e0f400f26fe34666"}
Jan 21
14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.156184 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.156584 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.656569398 +0000 UTC m=+141.733402427 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.173405 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" event={"ID":"64f0091d-255f-4e9a-a14c-33d240892e51","Type":"ContainerStarted","Data":"7830105e01a70233d53b29dd6ad721cea91d7d966cdcf81ab1149690397f310d"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.175769 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" event={"ID":"5765190c-206a-481f-a72e-4f119e8881bc","Type":"ContainerStarted","Data":"af9a4c5417f31d495b933557f861be71016754127f329cd492bd254914a008a6"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.178029 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" event={"ID":"50ac8539-334d-4811-8b3e-7a2df9e4c931","Type":"ContainerStarted","Data":"96901f7c8e0e1a0b90398b171ed2b4422f8be867bf4ce25bf476460739f0265c"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.184776 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" event={"ID":"89746c70-7e6b-4f62-acb0-25848752b0bf","Type":"ContainerStarted","Data":"6b71b6ab643b50a2a8fb6aff5b6c6d934f2132558e0895bc0f3370f5a4c9fe6c"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.197544 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" event={"ID":"179de16d-c6d0-4cda-8d1f-8c2396301175","Type":"ContainerStarted","Data":"55ed63decb6129b185123334a130753c5c33884bc167ffd4431cd04957e60efe"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.208463 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-9hktz" event={"ID":"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86","Type":"ContainerStarted","Data":"025c731fc711aaa053663870e6d80837e939266d5959b9e0ce0d30d685b6a8b7"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.208499 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-9hktz" 
event={"ID":"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86","Type":"ContainerStarted","Data":"bd4b4c2546706399440225b2783ab05a0e42663479be86605c3e684a1ad16a84"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.209171 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.210709 4902 patch_prober.go:28] interesting pod/console-operator-58897d9998-9hktz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.210760 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-9hktz" podUID="71696f1d-02bf-4fc5-a7f5-8dc351b3bf86" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.243508 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj"] Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.259184 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.261734 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.761721192 +0000 UTC m=+141.838554221 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.362466 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.362778 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.862743676 +0000 UTC m=+141.939576705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.363573 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.364526 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.864509726 +0000 UTC m=+141.941342755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.403589 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-w2qlx"] Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.422103 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-j7zvj"] Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.465179 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.465428 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.965392225 +0000 UTC m=+142.042225264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.465539 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.465968 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.965956915 +0000 UTC m=+142.042789954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.566473 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.566822 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.066806393 +0000 UTC m=+142.143639422 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.631029 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod904ff956_5fbf_4e43_aede_3fa612c9bb70.slice/crio-conmon-ab669bf48465c2d82c87c849c36300ffcf45a563f58cfbdf46cb032576ff4014.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64d60c19_a655_408a_99e4_becff3e27018.slice/crio-conmon-a40291ad90a37546c69056c86f5fd6bb86ff3a4c5ea686b6f4b0bec92e3cd415.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod904ff956_5fbf_4e43_aede_3fa612c9bb70.slice/crio-ab669bf48465c2d82c87c849c36300ffcf45a563f58cfbdf46cb032576ff4014.scope\": RecentStats: unable to find data in memory cache]" Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.669098 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.669894 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.169848166 +0000 UTC m=+142.246681195 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.770943 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.771383 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.271364495 +0000 UTC m=+142.348197524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.874955 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.875571 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.375545656 +0000 UTC m=+142.452378685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.983515 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.983720 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.483690471 +0000 UTC m=+142.560523500 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.993332 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.995155 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.495130248 +0000 UTC m=+142.571963277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.030930 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tn2zp"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.043559 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lrz7m"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.052833 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v4hs9"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.066154 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.066572 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9nw4v" podStartSLOduration=122.066551352 podStartE2EDuration="2m2.066551352s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:20.064894276 +0000 UTC m=+142.141727315" watchObservedRunningTime="2026-01-21 14:36:20.066551352 +0000 UTC m=+142.143384381" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.099711 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.100387 4902 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.600366835 +0000 UTC m=+142.677199874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.100932 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-9hktz" podStartSLOduration=122.100914063 podStartE2EDuration="2m2.100914063s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:20.099413752 +0000 UTC m=+142.176246791" watchObservedRunningTime="2026-01-21 14:36:20.100914063 +0000 UTC m=+142.177747092" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.182025 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" podStartSLOduration=122.182002914 podStartE2EDuration="2m2.182002914s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:20.179012213 +0000 UTC m=+142.255845242" watchObservedRunningTime="2026-01-21 14:36:20.182002914 +0000 UTC m=+142.258835943" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.201100 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.201431 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.70141845 +0000 UTC m=+142.778251479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.214227 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" event={"ID":"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8","Type":"ContainerStarted","Data":"f3e3c696d580ba36d66d8f54bae312874665528fcaca3bc905925c867f9e2fa8"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.215321 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2lccn" event={"ID":"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1","Type":"ContainerStarted","Data":"46d32293e9a1ebe95002d8fc44dedd1b77910021f7c303a17376c725c8c4c09f"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.216986 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" event={"ID":"c7158f8a-be32-4700-857f-faf9157f99f5","Type":"ContainerStarted","Data":"31b5818a193a42b1200764cd8a3a2ec82450c46b99cee82fd307ec9a84582b72"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.218493 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-j7zvj" event={"ID":"8285f69a-516d-4bdd-9a14-72d966a0b208","Type":"ContainerStarted","Data":"f12309a0d6ca0f497ebc178cfaaa2b142c59ceecdbea722a81d5a55f1dfbbbdf"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.219749 4902 generic.go:334] "Generic (PLEG): container finished" podID="904ff956-5fbf-4e43-aede-3fa612c9bb70" containerID="ab669bf48465c2d82c87c849c36300ffcf45a563f58cfbdf46cb032576ff4014" exitCode=0 Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.219831 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" event={"ID":"904ff956-5fbf-4e43-aede-3fa612c9bb70","Type":"ContainerDied","Data":"ab669bf48465c2d82c87c849c36300ffcf45a563f58cfbdf46cb032576ff4014"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.221723 4902 generic.go:334] "Generic (PLEG): container finished" podID="64d60c19-a655-408a-99e4-becff3e27018" containerID="a40291ad90a37546c69056c86f5fd6bb86ff3a4c5ea686b6f4b0bec92e3cd415" exitCode=0 Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.221801 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" event={"ID":"64d60c19-a655-408a-99e4-becff3e27018","Type":"ContainerDied","Data":"a40291ad90a37546c69056c86f5fd6bb86ff3a4c5ea686b6f4b0bec92e3cd415"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.223724 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" event={"ID":"0c16a673-e56a-49ff-ac34-6910e02214a6","Type":"ContainerStarted","Data":"38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.223997 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.224857 4902 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" event={"ID":"3a537cbb-d314-4f04-94c8-625c03eb5a68","Type":"ContainerStarted","Data":"d12d5adaf4cf0a0de2807d4df1984f2a010fb9f9818990386a5c00cca0f58a26"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.226237 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" event={"ID":"ef463925-8c6c-4217-9bba-e15e1283c4c8","Type":"ContainerStarted","Data":"854880c9532f4e83d83dca8a7d08151734f438f7e3dff43a0dca0a285d133256"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.226987 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" event={"ID":"2c1970f7-f131-4594-b396-d33bb9776e33","Type":"ContainerStarted","Data":"609ac5d4fbccd4f8093eadf963d6f2a6133aa02fadebe3c6d80a03852e44360e"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.230168 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" event={"ID":"d7faf6fc-58fe-4457-bb7c-510fce0b60a7","Type":"ContainerStarted","Data":"25a6bde7b5ab658b0501b574c3d07bae218c912f2fa840c9a069625655bdee6a"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.230857 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w2qlx" event={"ID":"53985f44-9907-48a1-8912-6163cecceba9","Type":"ContainerStarted","Data":"78756f4d434114901247c17bc97805dca0ac35ef01e7b1ced811639a49abfc08"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.232183 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" event={"ID":"89746c70-7e6b-4f62-acb0-25848752b0bf","Type":"ContainerStarted","Data":"cc02839a84c71e426caecaa1a34091b036fa383659a5d16dc87775660009b2f1"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.233987 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-2lccn" podStartSLOduration=122.23397285 podStartE2EDuration="2m2.23397285s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:20.233512645 +0000 UTC m=+142.310345674" watchObservedRunningTime="2026-01-21 14:36:20.23397285 +0000 UTC m=+142.310805879" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.234538 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" event={"ID":"a94b1199-eac7-4e88-ad39-44936959740c","Type":"ContainerStarted","Data":"481ed30831571fe7fef815e1e3f5d943baba998445b57b31d3dba142f2a78f09"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.242246 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.258818 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.262813 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Jan 21 14:36:20 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:20 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:20 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.263178 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.300800 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" podStartSLOduration=122.300779678 podStartE2EDuration="2m2.300779678s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:20.260205687 +0000 UTC m=+142.337038716" watchObservedRunningTime="2026-01-21 14:36:20.300779678 +0000 UTC m=+142.377612707" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.303105 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.304906 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.804883897 +0000 UTC m=+142.881716936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.336323 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" podStartSLOduration=122.336303619 podStartE2EDuration="2m2.336303619s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:20.33397285 +0000 UTC m=+142.410805879" watchObservedRunningTime="2026-01-21 14:36:20.336303619 +0000 UTC m=+142.413136648" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.405714 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.407750 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.907728662 +0000 UTC m=+142.984561701 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.459205 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" podStartSLOduration=122.459179801 podStartE2EDuration="2m2.459179801s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:20.456352526 +0000 UTC m=+142.533185565" watchObservedRunningTime="2026-01-21 14:36:20.459179801 +0000 UTC m=+142.536012830" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.507387 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.507619 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.007591688 +0000 UTC m=+143.084424717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.507886 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.508277 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.00826362 +0000 UTC m=+143.085096649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.609586 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.609774 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.10974176 +0000 UTC m=+143.186574859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.610130 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.610473 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.110460764 +0000 UTC m=+143.187293793 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.711021 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.711278 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.21124557 +0000 UTC m=+143.288078599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.711438 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.711813 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.211754128 +0000 UTC m=+143.288587157 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.734879 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b5657"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.734929 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.739213 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.739283 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.741209 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.744262 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.745960 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.749746 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.812626 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.812794 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.312767181 +0000 UTC m=+143.389600210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.812961 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.813321 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.31330715 +0000 UTC m=+143.390140179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.903570 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5q929"]
Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.905745 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t"]
Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.907602 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm"]
Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.909398 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7"]
Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.913465 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.913649 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.41361517 +0000 UTC m=+143.490448219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.914404 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.914763 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.414751658 +0000 UTC m=+143.491584687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:20 crc kubenswrapper[4902]: W0121 14:36:20.922561 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43c52dc8_25a9_44d5_bea6_ecd091f55d54.slice/crio-6ce89a012e959d0d1c80ae25f0ae117c8a9158c156776e00f960bef70da9a0c8 WatchSource:0}: Error finding container 6ce89a012e959d0d1c80ae25f0ae117c8a9158c156776e00f960bef70da9a0c8: Status 404 returned error can't find the container with id 6ce89a012e959d0d1c80ae25f0ae117c8a9158c156776e00f960bef70da9a0c8
Jan 21 14:36:20 crc kubenswrapper[4902]: W0121 14:36:20.925952 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29cc0582_bf2f_4e0b_a351_2d933fdbd52f.slice/crio-3ee89daab4206d378ab6b73f49d318c45caa0b4e6dfa32ea69e1145e23de646b WatchSource:0}: Error finding container 3ee89daab4206d378ab6b73f49d318c45caa0b4e6dfa32ea69e1145e23de646b: Status 404 returned error can't find the container with id 3ee89daab4206d378ab6b73f49d318c45caa0b4e6dfa32ea69e1145e23de646b
Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.954298 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb"
Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.954555 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-9hktz"
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.015430 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.015674 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.515626057 +0000 UTC m=+143.592459086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.015850 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.016253 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.516233608 +0000 UTC m=+143.593066637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.117847 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.118013 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.617980067 +0000 UTC m=+143.694813086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.118512 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.118908 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.618898608 +0000 UTC m=+143.695731817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.220252 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.220575 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.720559054 +0000 UTC m=+143.797392083 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.264174 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 14:36:21 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld
Jan 21 14:36:21 crc kubenswrapper[4902]: [+]process-running ok
Jan 21 14:36:21 crc kubenswrapper[4902]: healthz check failed
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.264251 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.277494 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" event={"ID":"c4449adc-13fa-40ee-a058-f42120e5cbee","Type":"ContainerStarted","Data":"62bba82be51fa967b06d22900e1d09c6c76da1fafd581ece10feac43602701f8"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.281475 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" event={"ID":"5765190c-206a-481f-a72e-4f119e8881bc","Type":"ContainerStarted","Data":"229695f8f6f067abf648c89e99c16820f01cd4732e9afb31b1504ec8504f054d"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.292558 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" event={"ID":"d7faf6fc-58fe-4457-bb7c-510fce0b60a7","Type":"ContainerStarted","Data":"0c3e4aae80e66678adf08af05318fab97caed13e1501234e440d7512ff465be7"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.295523 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" event={"ID":"2ec3e08f-1312-4857-b152-cde8e51aad05","Type":"ContainerStarted","Data":"2b13e8ed7aebb35bc46b7e659e0a6c8e5700791a96dc79f4223d07d90d3d26d1"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.304085 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" event={"ID":"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a","Type":"ContainerStarted","Data":"5faecf633ad835c06cc6b68b846c03acbac8bd192c5a83038bc668a703642a8e"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.305911 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-j7zvj" event={"ID":"8285f69a-516d-4bdd-9a14-72d966a0b208","Type":"ContainerStarted","Data":"ace45929ee18b8ed6bd996d412540d21898463ca5bd92667bf2681c7fb613a58"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.317416 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" event={"ID":"179de16d-c6d0-4cda-8d1f-8c2396301175","Type":"ContainerStarted","Data":"f47ac0d984bd534f8dbc95c34421c4c7e222580c524d56fef0a86d89726b4ac0"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.318347 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" event={"ID":"a605a533-8d8c-47bc-a04c-0739f97482e6","Type":"ContainerStarted","Data":"728175cf5b005f0e763265b29a94c02e67b5d7f4a0beb6060f3abf9a29d438d1"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.321637 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" event={"ID":"9467c15f-f3fe-4594-b97d-0838d43877d1","Type":"ContainerStarted","Data":"4071e7dbb12d0b5b7421e91deda493cfa67e6e80b66300dbcf0a816e28b79ddb"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.322371 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.322762 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.822743637 +0000 UTC m=+143.899576666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.324088 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" event={"ID":"92715363-5170-4018-8a70-eb8274f5ffe0","Type":"ContainerStarted","Data":"bfc9408c823c1107405c930c1f3208a46d5e299091b2987803611159f6c61249"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.325850 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" event={"ID":"677296cf-109d-4fc1-b3db-c8312605a5fb","Type":"ContainerStarted","Data":"985054a8b636598e5e894f279de6efdc0a8048e601c8053a559d2a9f7195246d"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.328368 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" event={"ID":"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890","Type":"ContainerStarted","Data":"2f2d30b00791a905e2415cfd98ff0a171ac99c0a6a1631094a55829507d65d4a"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.331010 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" event={"ID":"a14f9ae8-3c9b-4618-8255-a55408525925","Type":"ContainerStarted","Data":"57b65b1371c647ab1701b01ad0b5115915570f20b169bf098d2e945d4ecac6ba"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.334865 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b5657" event={"ID":"43c52dc8-25a9-44d5-bea6-ecd091f55d54","Type":"ContainerStarted","Data":"6ce89a012e959d0d1c80ae25f0ae117c8a9158c156776e00f960bef70da9a0c8"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.336955 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rfwp8" event={"ID":"95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0","Type":"ContainerStarted","Data":"29c46b2b049bb70511a253a187cfd3e870f8f998a98603437ac414333e5fbd0b"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.361140 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" event={"ID":"031f1783-31bd-4008-ace8-3ede7d0a86de","Type":"ContainerStarted","Data":"65f0fe8a5cf0f9f184876184c7ede6f5a3b3ad412adac5e57a7d116c9d516caa"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.372940 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" event={"ID":"78db9f9d-1963-42d2-9e52-da80ef710af8","Type":"ContainerStarted","Data":"39d128c42c3e2d36432198ccf10990b342a4951d880f9a278870d3aa4ef99268"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.380341 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb" event={"ID":"29cc0582-bf2f-4e0b-a351-2d933fdbd52f","Type":"ContainerStarted","Data":"3ee89daab4206d378ab6b73f49d318c45caa0b4e6dfa32ea69e1145e23de646b"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.391788 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" podStartSLOduration=123.39177037 podStartE2EDuration="2m3.39177037s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:21.391614334 +0000 UTC m=+143.468447353" watchObservedRunningTime="2026-01-21 14:36:21.39177037 +0000 UTC m=+143.468603399"
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.423264 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.423586 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.923570164 +0000 UTC m=+144.000403193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.427206 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" event={"ID":"70656800-9429-43df-a1cb-7c8617d23b3f","Type":"ContainerStarted","Data":"d99c6757f9658ce32d4704b76f3d35e4415e44f33b1b27def593d2cbcd31f4c9"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.433384 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" event={"ID":"c7158f8a-be32-4700-857f-faf9157f99f5","Type":"ContainerStarted","Data":"367d869d9b3c4b737b065ed87b6bd46066ee2a10f6733ab3b357221abf8fd7a9"}
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.525557 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.526626 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.026612767 +0000 UTC m=+144.103445796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.627571 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.627750 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.127724814 +0000 UTC m=+144.204557843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.628225 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.629313 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.129300237 +0000 UTC m=+144.206133266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.728608 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.728709 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.228693596 +0000 UTC m=+144.305526625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.729017 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.729309 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.229302567 +0000 UTC m=+144.306135596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.830720 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.831172 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.331153099 +0000 UTC m=+144.407986128 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.932803 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.933255 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.433237099 +0000 UTC m=+144.510070128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.301903 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:22 crc kubenswrapper[4902]: E0121 14:36:22.302376 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.802358314 +0000 UTC m=+144.879191343 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.308307 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 14:36:22 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld
Jan 21 14:36:22 crc kubenswrapper[4902]: [+]process-running ok
Jan 21 14:36:22 crc kubenswrapper[4902]: healthz check failed
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.308386 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.316702 4902 csr.go:261] certificate signing request csr-kt5h4 is approved, waiting to be issued
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.325509 4902 csr.go:257] certificate signing request csr-kt5h4 is issued
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.333762 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc"
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.404037 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:22 crc kubenswrapper[4902]: E0121 14:36:22.404538 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.904521537 +0000 UTC m=+144.981354566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.453172 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w2qlx" event={"ID":"53985f44-9907-48a1-8912-6163cecceba9","Type":"ContainerStarted","Data":"ae7d743409db94cdee09def2038b0ceedb33c18c4ff2f90364d5e897f5a316f8"}
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.454434 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" event={"ID":"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f","Type":"ContainerStarted","Data":"082f22371a847422c526a5df9b818eeea07fe3bedc91572314a7978b2df34897"}
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.456016 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" event={"ID":"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea","Type":"ContainerStarted","Data":"f33eb07e6973a4d0cc5ef537f409a1783f74414b529c8c9a6202d35848934821"}
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.457301 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" event={"ID":"64f0091d-255f-4e9a-a14c-33d240892e51","Type":"ContainerStarted","Data":"6c3057c438dc57c5555cbf5cdffec68b45df0f20e85d9c6333890ef6a0f707b9"}
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.458945 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" event={"ID":"91a268d0-59c0-4e7f-8b78-260d14051e34","Type":"ContainerStarted","Data":"a763e53ce94e844368e3dfb1b991dc8758c148ee43f337fbd0c3c870ee0243f8"}
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.460598 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" event={"ID":"50ac8539-334d-4811-8b3e-7a2df9e4c931","Type":"ContainerStarted","Data":"1fb8341a8f9936e50258c5ba8c5eef0a3f9eb4af2891d0507a32127aa4f5c071"}
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.461876 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" event={"ID":"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8","Type":"ContainerStarted","Data":"145fae805492337d72ae5a2fdbfd9b5b0428fecd9ac8aa1ba38c30b9a9528893"}
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.463409 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" event={"ID":"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8","Type":"ContainerStarted","Data":"c25180bfc2c3c27bece0d13daf7ce3e485b9e8029cf3503223c6d80beee0b69b"}
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.464686 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-w8c9w" event={"ID":"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a","Type":"ContainerStarted","Data":"b1bbb1b8d7fdfe70def77ee61b530b99341e6aef5002f6b303b7881e334b9f6a"}
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.466011 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" event={"ID":"a94b1199-eac7-4e88-ad39-44936959740c","Type":"ContainerStarted","Data":"c73818123ed498ca7d09532898accc3e425730f6c65c803a1b8d46ed926ea7e2"}
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.466793 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-j7zvj"
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.468373 4902 patch_prober.go:28] interesting pod/downloads-7954f5f757-j7zvj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.468436 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-j7zvj" podUID="8285f69a-516d-4bdd-9a14-72d966a0b208" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.484135 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" podStartSLOduration=124.484114277 podStartE2EDuration="2m4.484114277s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:22.481485648 +0000 UTC m=+144.558318687" watchObservedRunningTime="2026-01-21 14:36:22.484114277 +0000 UTC m=+144.560947306"
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.500513 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-j7zvj" podStartSLOduration=124.50049474 podStartE2EDuration="2m4.50049474s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:22.49959405 +0000 UTC m=+144.576427089" watchObservedRunningTime="2026-01-21 14:36:22.50049474 +0000 UTC m=+144.577327769"
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.505116 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:22 crc kubenswrapper[4902]: E0121 14:36:22.505579 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.005537111 +0000 UTC m=+145.082370160 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.554634 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" podStartSLOduration=124.55461452 podStartE2EDuration="2m4.55461452s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:22.539985515 +0000 UTC m=+144.616818564" watchObservedRunningTime="2026-01-21 14:36:22.55461452 +0000 UTC m=+144.631447539"
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.607028 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:22 crc kubenswrapper[4902]: E0121 14:36:22.612183 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.112161824 +0000 UTC m=+145.188994843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.708675 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:22 crc kubenswrapper[4902]: E0121 14:36:22.708837 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.208814061 +0000 UTC m=+145.285647090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.708891 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:22 crc kubenswrapper[4902]: E0121 14:36:22.709245 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.209238585 +0000 UTC m=+145.286071614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.810341 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:22 crc kubenswrapper[4902]: E0121 14:36:22.810775 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.310759376 +0000 UTC m=+145.387592415 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.911984 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:22 crc kubenswrapper[4902]: E0121 14:36:22.912403 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.412387081 +0000 UTC m=+145.489220110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.013003 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.013521 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.513479148 +0000 UTC m=+145.590312177 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.114371 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.114702 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.614685988 +0000 UTC m=+145.691519017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.216584 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.216997 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.716977635 +0000 UTC m=+145.793810674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.261682 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 14:36:23 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld
Jan 21 14:36:23 crc kubenswrapper[4902]: [+]process-running ok
Jan 21 14:36:23 crc kubenswrapper[4902]: healthz check failed
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.261765 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.317874 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.318243 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.818228216 +0000 UTC m=+145.895061245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.326920 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 14:31:22 +0000 UTC, rotation deadline is 2026-12-11 07:59:50.815113904 +0000 UTC
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.326958 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7769h23m27.488157783s for next certificate rotation
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.418528 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.418668 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.91865001 +0000 UTC m=+145.995483029 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.418826 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.419109 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.919101645 +0000 UTC m=+145.995934664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.471613 4902 patch_prober.go:28] interesting pod/downloads-7954f5f757-j7zvj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.471676 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-j7zvj" podUID="8285f69a-516d-4bdd-9a14-72d966a0b208" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.485180 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" podStartSLOduration=125.485162648 podStartE2EDuration="2m5.485162648s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:23.483912576 +0000 UTC m=+145.560745605" watchObservedRunningTime="2026-01-21 14:36:23.485162648 +0000 UTC m=+145.561995677"
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.485486 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rfwp8" podStartSLOduration=9.485482109 podStartE2EDuration="9.485482109s" podCreationTimestamp="2026-01-21 14:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:22.556221594 +0000 UTC m=+144.633054623" watchObservedRunningTime="2026-01-21 14:36:23.485482109 +0000 UTC m=+145.562315138"
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.502861 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" podStartSLOduration=125.502844356 podStartE2EDuration="2m5.502844356s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:23.500947282 +0000 UTC m=+145.577780311" watchObservedRunningTime="2026-01-21 14:36:23.502844356 +0000 UTC m=+145.579677385"
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.520529 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.520794 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.020764571 +0000 UTC m=+146.097597790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.521416 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.521707 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.021694823 +0000 UTC m=+146.098527852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.529206 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-w8c9w" podStartSLOduration=8.529190736 podStartE2EDuration="8.529190736s" podCreationTimestamp="2026-01-21 14:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:23.52901389 +0000 UTC m=+145.605846919" watchObservedRunningTime="2026-01-21 14:36:23.529190736 +0000 UTC m=+145.606023765"
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.555113 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" podStartSLOduration=125.555090322 podStartE2EDuration="2m5.555090322s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:23.552722792 +0000 UTC m=+145.629555821" watchObservedRunningTime="2026-01-21 14:36:23.555090322 +0000 UTC m=+145.631923351"
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.630957 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" podStartSLOduration=125.630939305 podStartE2EDuration="2m5.630939305s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:23.629825637 +0000 UTC m=+145.706658666" watchObservedRunningTime="2026-01-21 14:36:23.630939305 +0000 UTC m=+145.707772334"
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.632658 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" podStartSLOduration=125.632651633 podStartE2EDuration="2m5.632651633s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:23.579451085 +0000 UTC m=+145.656284114" watchObservedRunningTime="2026-01-21 14:36:23.632651633 +0000 UTC m=+145.709484662"
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.634769 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.635157 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.135125846 +0000 UTC m=+146.211958925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.661963 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" podStartSLOduration=125.661940413 podStartE2EDuration="2m5.661940413s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:23.660700841 +0000 UTC m=+145.737533870" watchObservedRunningTime="2026-01-21 14:36:23.661940413 +0000 UTC m=+145.738773442"
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.693934 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" podStartSLOduration=125.693916343 podStartE2EDuration="2m5.693916343s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:23.69262399 +0000 UTC m=+145.769457019" watchObservedRunningTime="2026-01-21 14:36:23.693916343 +0000 UTC m=+145.770749372"
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.743833 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.744258 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.244240224 +0000 UTC m=+146.321073263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.844762 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.845169 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.345152895 +0000 UTC m=+146.421985924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.945835 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.946207 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.446194519 +0000 UTC m=+146.523027548 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.046535 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.046723 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.546698016 +0000 UTC m=+146.623531045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.047162 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.047501 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.547487203 +0000 UTC m=+146.624320232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.150365 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.150537 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.650510125 +0000 UTC m=+146.727343154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.151000 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.151354 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.651337803 +0000 UTC m=+146.728170982 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.252774 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.253325 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.753304149 +0000 UTC m=+146.830137178 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.268268 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:24 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:24 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:24 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.268332 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.356337 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.357176 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.857162349 +0000 UTC m=+146.933995378 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.457503 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.457715 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.957690276 +0000 UTC m=+147.034523305 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.478553 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" event={"ID":"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a","Type":"ContainerStarted","Data":"71fe55aece327484af9828b1af1a5b805cb4b117d45e52bba336220878a998c5"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.480715 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" event={"ID":"64f0091d-255f-4e9a-a14c-33d240892e51","Type":"ContainerStarted","Data":"a24f95ad5708c5b01be06b7fa4b5c86df3998a9939540c8781947f01bcb24a49"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.482615 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" event={"ID":"904ff956-5fbf-4e43-aede-3fa612c9bb70","Type":"ContainerStarted","Data":"9fad1be2d5ad45addaed2d543aafd9069c0049478775fea68fe2772d9850328a"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.484055 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" event={"ID":"a14f9ae8-3c9b-4618-8255-a55408525925","Type":"ContainerStarted","Data":"98d62d8fce0bbd01890f37dea5c043635816283893340bb4dba1b50f8dda4eeb"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.485490 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b5657" event={"ID":"43c52dc8-25a9-44d5-bea6-ecd091f55d54","Type":"ContainerStarted","Data":"a34c936cbe9d2177f65ea603131f883feef8a4885ee272f71234971dc9eeaf76"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.487211 4902 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" event={"ID":"677296cf-109d-4fc1-b3db-c8312605a5fb","Type":"ContainerStarted","Data":"5d3fa35637ce99b6e95f1b895ce8cc22e716885755a11ccb7d2fb82b3563b4fd"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.490425 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" event={"ID":"d7faf6fc-58fe-4457-bb7c-510fce0b60a7","Type":"ContainerStarted","Data":"17c0305eb637c5e08d8f644f2ec603806f0dddd00b1694e3fd9a5093f24851ce"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.492033 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" event={"ID":"a605a533-8d8c-47bc-a04c-0739f97482e6","Type":"ContainerStarted","Data":"a056f469ebc3174bb2315735af44c9ceb9220a327ce4872a92786404bb378a1e"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.492077 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" event={"ID":"a605a533-8d8c-47bc-a04c-0739f97482e6","Type":"ContainerStarted","Data":"a51f237ad9632527738df94acc24ebe0204590d7e6cecb281dc33eb89f282193"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.493491 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" event={"ID":"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea","Type":"ContainerStarted","Data":"f355a8f80bba580e989a488ee5ecdb142deb96878eb1a0ddc5eade072c5b2f16"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.495349 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" event={"ID":"70656800-9429-43df-a1cb-7c8617d23b3f","Type":"ContainerStarted","Data":"de8fcd8c3571217b412f9ba6c688fc875ba6c7c7eb18b7b87d8ab03820c43542"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.496460 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" event={"ID":"031f1783-31bd-4008-ace8-3ede7d0a86de","Type":"ContainerStarted","Data":"7c797e570cd4e852e50c7663ffad626c90cceaf98256bfe9a699a2f48573dacb"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.498179 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" event={"ID":"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8","Type":"ContainerStarted","Data":"686aa13cebf66d5e500d72e6f0fb2d508bd7beec304b17aacbf7db351b9ecb8f"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.498845 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" podStartSLOduration=126.498834557 podStartE2EDuration="2m6.498834557s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.49627482 +0000 UTC m=+146.573107849" watchObservedRunningTime="2026-01-21 14:36:24.498834557 +0000 UTC m=+146.575667586" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.503357 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w2qlx" 
event={"ID":"53985f44-9907-48a1-8912-6163cecceba9","Type":"ContainerStarted","Data":"8a9d0691cf453da8912a0274448564ba0da760180a0ac16dcfdf50e65e2c28d1"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.503521 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.506453 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" event={"ID":"c4449adc-13fa-40ee-a058-f42120e5cbee","Type":"ContainerStarted","Data":"e7cc9adf6c6ffd87a3ba5cf44aa78e6276647d776afe90236ad2abc49306adbd"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.509874 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" event={"ID":"64d60c19-a655-408a-99e4-becff3e27018","Type":"ContainerStarted","Data":"94d489714adff1e3f3a01c05f46e5c90ec8a2939a9512faab21abd82058a0517"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.512221 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" event={"ID":"92715363-5170-4018-8a70-eb8274f5ffe0","Type":"ContainerStarted","Data":"7287b89ee773336dcfa7454e73ba2ffd95cdfbfccb3263364aa4ebeb9d39bb36"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.512667 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.514639 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" event={"ID":"2ec3e08f-1312-4857-b152-cde8e51aad05","Type":"ContainerStarted","Data":"401f5ff346f059614945345b9f74b97f85bb191a26a285bb372eccef2578c0cb"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.514692 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" event={"ID":"2ec3e08f-1312-4857-b152-cde8e51aad05","Type":"ContainerStarted","Data":"8fbaeb9559c6a66ad097756767cc6caca67b07fc7b64b7940e97a53cd7c2e934"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.514758 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.516089 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb" event={"ID":"29cc0582-bf2f-4e0b-a351-2d933fdbd52f","Type":"ContainerStarted","Data":"ec856e1dac38a3ac7ad9c0f8b2e34674e85ce45ab6848c35bbe4e309c28a1e1b"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.516120 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb" event={"ID":"29cc0582-bf2f-4e0b-a351-2d933fdbd52f","Type":"ContainerStarted","Data":"705e84c3b4392269291fa277a420f60eeb0b713869479afcd68ea9dce4fc8152"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.540439 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.542612 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" podStartSLOduration=126.542600476 podStartE2EDuration="2m6.542600476s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.540498145 +0000 UTC m=+146.617331174" watchObservedRunningTime="2026-01-21 14:36:24.542600476 +0000 UTC m=+146.619433505" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.542757 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" event={"ID":"2c1970f7-f131-4594-b396-d33bb9776e33","Type":"ContainerStarted","Data":"98b807a7449c4e4c765b48d7c0ffdf31c9596dab0362dc991807d0d7c743d98d"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.554199 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" event={"ID":"9467c15f-f3fe-4594-b97d-0838d43877d1","Type":"ContainerStarted","Data":"cb785d5c8758cd5e440b2003decbf278c26896fda13150ac48e1d13a3a94fe6d"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.561288 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.565077 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.065061695 +0000 UTC m=+147.141894724 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.575375 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" event={"ID":"ef463925-8c6c-4217-9bba-e15e1283c4c8","Type":"ContainerStarted","Data":"5c75b66565cf29879a3061886d41628130979d54e2a480a895d697fc88a6b063"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.576199 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.579807 4902 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zzmhd container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.579869 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" podUID="ef463925-8c6c-4217-9bba-e15e1283c4c8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.627929 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" podStartSLOduration=126.627907249 podStartE2EDuration="2m6.627907249s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.580708954 +0000 UTC m=+146.657541983" watchObservedRunningTime="2026-01-21 14:36:24.627907249 +0000 UTC m=+146.704740288" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.666590 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.668410 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.168383657 +0000 UTC m=+147.245216686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.691604 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" podStartSLOduration=126.691579561 podStartE2EDuration="2m6.691579561s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.634308095 +0000 UTC m=+146.711141124" watchObservedRunningTime="2026-01-21 14:36:24.691579561 +0000 UTC m=+146.768412600" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.763767 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" podStartSLOduration=126.76374693 podStartE2EDuration="2m6.76374693s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.76228166 +0000 UTC m=+146.839114679" watchObservedRunningTime="2026-01-21 14:36:24.76374693 +0000 UTC m=+146.840579959" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.768703 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" podStartSLOduration=126.768688997 podStartE2EDuration="2m6.768688997s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.69363905 +0000 UTC m=+146.770472079" watchObservedRunningTime="2026-01-21 14:36:24.768688997 +0000 UTC m=+146.845522026" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.768948 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.769486 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.269465093 +0000 UTC m=+147.346298122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.838564 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" podStartSLOduration=126.838542428 podStartE2EDuration="2m6.838542428s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.799278341 +0000 UTC m=+146.876111370" watchObservedRunningTime="2026-01-21 14:36:24.838542428 +0000 UTC m=+146.915375457" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.839967 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" podStartSLOduration=126.839959345 podStartE2EDuration="2m6.839959345s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.836870711 +0000 UTC m=+146.913703740" watchObservedRunningTime="2026-01-21 14:36:24.839959345 +0000 UTC m=+146.916792374" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.854057 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" podStartSLOduration=126.854025181 podStartE2EDuration="2m6.854025181s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.851915189 +0000 UTC m=+146.928748218" watchObservedRunningTime="2026-01-21 14:36:24.854025181 +0000 UTC m=+146.930858210" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.870385 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.870800 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.370769577 +0000 UTC m=+147.447602596 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.878314 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb" podStartSLOduration=126.878294561 podStartE2EDuration="2m6.878294561s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.877636019 +0000 UTC m=+146.954469048" watchObservedRunningTime="2026-01-21 14:36:24.878294561 +0000 UTC m=+146.955127590" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.912640 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-w2qlx" podStartSLOduration=9.912617701 podStartE2EDuration="9.912617701s" podCreationTimestamp="2026-01-21 14:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.908242313 +0000 UTC m=+146.985075342" watchObservedRunningTime="2026-01-21 14:36:24.912617701 +0000 UTC m=+146.989450730" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.936141 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" podStartSLOduration=126.936114955 podStartE2EDuration="2m6.936114955s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.934289513 +0000 UTC m=+147.011122552" watchObservedRunningTime="2026-01-21 14:36:24.936114955 +0000 UTC m=+147.012947994" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.970309 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" podStartSLOduration=126.97028724 podStartE2EDuration="2m6.97028724s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.967985902 +0000 UTC m=+147.044818941" watchObservedRunningTime="2026-01-21 14:36:24.97028724 +0000 UTC m=+147.047120269" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.971990 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.972435 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 14:36:25.472419792 +0000 UTC m=+147.549252831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.993982 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" podStartSLOduration=126.99396197 podStartE2EDuration="2m6.99396197s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.992915245 +0000 UTC m=+147.069748274" watchObservedRunningTime="2026-01-21 14:36:24.99396197 +0000 UTC m=+147.070794999" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.036317 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" podStartSLOduration=127.036300291 podStartE2EDuration="2m7.036300291s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:25.036007651 +0000 UTC m=+147.112840680" watchObservedRunningTime="2026-01-21 14:36:25.036300291 +0000 UTC m=+147.113133310" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.063225 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" podStartSLOduration=127.06318231 podStartE2EDuration="2m7.06318231s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:25.06318248 +0000 UTC m=+147.140015509" watchObservedRunningTime="2026-01-21 14:36:25.06318231 +0000 UTC m=+147.140015339" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.073922 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.074124 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.574094008 +0000 UTC m=+147.650927047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.074348 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.074731 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.57472188 +0000 UTC m=+147.651554909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.175941 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.176301 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.676250071 +0000 UTC m=+147.753083110 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.176415 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.176828 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.67680901 +0000 UTC m=+147.753642039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.268495 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:25 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:25 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:25 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.268574 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.278227 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.278468 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.778437584 +0000 UTC m=+147.855270613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.278623 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.279016 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.779006474 +0000 UTC m=+147.855839683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.380429 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.380648 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.880621338 +0000 UTC m=+147.957454367 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.380992 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.381032 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.381107 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.381152 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.381173 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.381594 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.88157352 +0000 UTC m=+147.958406549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.382135 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.391951 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.396656 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.396936 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.482568 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.482717 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.982691168 +0000 UTC m=+148.059524197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.482773 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.483130 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.983121282 +0000 UTC m=+148.059954311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.584520 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.584673 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.084648693 +0000 UTC m=+148.161481722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.584769 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.585049 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.085030446 +0000 UTC m=+148.161863475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.589743 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" event={"ID":"2c1970f7-f131-4594-b396-d33bb9776e33","Type":"ContainerStarted","Data":"9d49120bb641011a9a5c94f4d523c824799d05732ca91f56c59e8ca3e763dcc2"}
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.594459 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" event={"ID":"64d60c19-a655-408a-99e4-becff3e27018","Type":"ContainerStarted","Data":"55fbb3c9518f031cb1920eb10c5ab585fd897728f0339f8963423381ffea6d31"}
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.602600 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b5657" event={"ID":"43c52dc8-25a9-44d5-bea6-ecd091f55d54","Type":"ContainerStarted","Data":"f8b5e4e851d84b916aeb4fad715c878d5713caf3d93b6c9eeb8fbd20ef30bf70"}
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.620307 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.623333 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.634383 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.674201 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" podStartSLOduration=127.67418464 podStartE2EDuration="2m7.67418464s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:25.101564537 +0000 UTC m=+147.178397566" watchObservedRunningTime="2026-01-21 14:36:25.67418464 +0000 UTC m=+147.751017669"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.674524 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" podStartSLOduration=127.674519151 podStartE2EDuration="2m7.674519151s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:25.672577755 +0000 UTC m=+147.749410784" watchObservedRunningTime="2026-01-21 14:36:25.674519151 +0000 UTC m=+147.751352180"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.687658 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.689490 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.189437115 +0000 UTC m=+148.266270144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.746631 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" podStartSLOduration=127.746600547 podStartE2EDuration="2m7.746600547s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:25.745591643 +0000 UTC m=+147.822424682" watchObservedRunningTime="2026-01-21 14:36:25.746600547 +0000 UTC m=+147.823433576"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.795515 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.795932 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.295917214 +0000 UTC m=+148.372750243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.808113 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-b5657" podStartSLOduration=127.808087765 podStartE2EDuration="2m7.808087765s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:25.793227023 +0000 UTC m=+147.870060052" watchObservedRunningTime="2026-01-21 14:36:25.808087765 +0000 UTC m=+147.884920794"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.867474 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.899811 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.900262 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.400240869 +0000 UTC m=+148.477073908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.940746 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6"
Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.961094 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.001230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.001634 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.501619816 +0000 UTC m=+148.578452845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.053917 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xgf94"]
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.054836 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgf94"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.061437 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.078153 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgf94"]
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.103152 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.104162 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.604146591 +0000 UTC m=+148.680979620 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.206293 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-catalog-content\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.206645 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.206670 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kswz\" (UniqueName: \"kubernetes.io/projected/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-kube-api-access-8kswz\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.206686 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-utilities\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94"
Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.207136 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.707109241 +0000 UTC m=+148.783942260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.246421 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fqq5l"]
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.247372 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fqq5l"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.258228 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.271358 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fqq5l"]
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.301247 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 14:36:26 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld
Jan 21 14:36:26 crc kubenswrapper[4902]: [+]process-running ok
Jan 21 14:36:26 crc kubenswrapper[4902]: healthz check failed
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.301317 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.309215 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.309467 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kswz\" (UniqueName: \"kubernetes.io/projected/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-kube-api-access-8kswz\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.309500 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-utilities\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.309524 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-utilities\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.309542 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ntv4\" (UniqueName: \"kubernetes.io/projected/bc7ccff8-2db2-4663-9565-42f2357e4bda-kube-api-access-4ntv4\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.309566 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-catalog-content\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.309593 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-catalog-content\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94"
Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.309877 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.809849763 +0000 UTC m=+148.886682812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.310259 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-catalog-content\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.310549 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-utilities\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.392601 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kswz\" (UniqueName: \"kubernetes.io/projected/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-kube-api-access-8kswz\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.399329 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgf94"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.411701 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.411742 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-utilities\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.411766 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ntv4\" (UniqueName: \"kubernetes.io/projected/bc7ccff8-2db2-4663-9565-42f2357e4bda-kube-api-access-4ntv4\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.411793 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-catalog-content\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.416603 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-utilities\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l"
Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.416876 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.916853039 +0000 UTC m=+148.993686068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.424327 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-catalog-content\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.467437 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fl2j4"]
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.468361 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fl2j4"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.508838 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ntv4\" (UniqueName: \"kubernetes.io/projected/bc7ccff8-2db2-4663-9565-42f2357e4bda-kube-api-access-4ntv4\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.512710 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.512949 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-catalog-content\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.512992 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-utilities\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.513026 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nlk7\" (UniqueName: \"kubernetes.io/projected/3c88f2d9-944f-408e-bfe3-41c8baac6175-kube-api-access-9nlk7\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4"
Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.513217 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.013191525 +0000 UTC m=+149.090024564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.521373 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fl2j4"]
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.618322 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fqq5l"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.619373 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-catalog-content\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.619717 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-catalog-content\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.619405 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-utilities\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.619779 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nlk7\" (UniqueName: \"kubernetes.io/projected/3c88f2d9-944f-408e-bfe3-41c8baac6175-kube-api-access-9nlk7\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.619827 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.620123 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.120112359 +0000 UTC m=+149.196945388 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.620163 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-utilities\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.636448 4902 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.680595 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-77b9d"]
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.688829 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77b9d"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.691476 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" event={"ID":"2c1970f7-f131-4594-b396-d33bb9776e33","Type":"ContainerStarted","Data":"befc9f7492ac06652b09ddd286943da31e3a5f5dbb26be0910f72aaabab97b0c"}
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.720513 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.720868 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.220852474 +0000 UTC m=+149.297685503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.749452 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-77b9d"]
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.796641 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nlk7\" (UniqueName: \"kubernetes.io/projected/3c88f2d9-944f-408e-bfe3-41c8baac6175-kube-api-access-9nlk7\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.821758 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-catalog-content\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.821882 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.821976 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sp9q\" (UniqueName: \"kubernetes.io/projected/008311b3-7361-4466-aacd-01bbaa16f6df-kube-api-access-2sp9q\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.822055 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-utilities\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d"
Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.823807 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.323791792 +0000 UTC m=+149.400624821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.869620 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fl2j4"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.923556 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.923769 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sp9q\" (UniqueName: \"kubernetes.io/projected/008311b3-7361-4466-aacd-01bbaa16f6df-kube-api-access-2sp9q\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d"
Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.923906 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.423883364 +0000 UTC m=+149.500716393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.924109 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-utilities\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.924164 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-catalog-content\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.924256 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.924556 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-utilities\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.924577 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-catalog-content\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d"
Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.924706 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.424697472 +0000 UTC m=+149.501530501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.958598 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.959673 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-9nw4v"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.960138 4902 patch_prober.go:28] interesting pod/console-f9d7485db-9nw4v container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.960185 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9nw4v" podUID="853f0809-8828-4976-9b04-dd078ab64ced" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused"
Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.985941 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sp9q\" (UniqueName: \"kubernetes.io/projected/008311b3-7361-4466-aacd-01bbaa16f6df-kube-api-access-2sp9q\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d"
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.025402 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:27 crc kubenswrapper[4902]: E0121 14:36:27.033873 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.533850781 +0000 UTC m=+149.610683810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.087456 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77b9d"
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.144062 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:27 crc kubenswrapper[4902]: E0121 14:36:27.144416 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.644399587 +0000 UTC m=+149.721232616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.477892 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:27 crc kubenswrapper[4902]: E0121 14:36:27.478377 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.978356184 +0000 UTC m=+150.055189213 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.480706 4902 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T14:36:26.636475152Z","Handler":null,"Name":""}
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.483740 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 14:36:27 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld
Jan 21 14:36:27 crc kubenswrapper[4902]: [+]process-running ok
Jan 21 14:36:27 crc kubenswrapper[4902]: healthz check failed
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.483781 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.484158 4902 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.484188 4902 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.490546 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgf94"]
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.501112 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fqq5l"]
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.529145 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fl2j4"]
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.579539 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:27 crc kubenswrapper[4902]: W0121 14:36:27.580231 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c88f2d9_944f_408e_bfe3_41c8baac6175.slice/crio-3ede55dacea16111f6202914e24a4d44b7e914f57126067eef1577b038c06a0b WatchSource:0}: Error finding container 3ede55dacea16111f6202914e24a4d44b7e914f57126067eef1577b038c06a0b: Status 404 returned error can't find the container with id 3ede55dacea16111f6202914e24a4d44b7e914f57126067eef1577b038c06a0b
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.615370 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.615406 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.732401 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"957b30e503d8781975d7142b54c3fab6a51781cda36a2ca1f026e1e15ff8a621"}
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.741183 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"913ba68ed42badf1566d83b31341f33e2cf15048515ef8a81733207860372fdd"}
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.772401 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh"
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.772434 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh"
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.786617 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz"
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.788433 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz"
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.798346 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.805374 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" event={"ID":"2c1970f7-f131-4594-b396-d33bb9776e33","Type":"ContainerStarted","Data":"e526b015b8182fec25aa1aa89eb2c511cfba5d47504213dfab85c3a51e848ef1"}
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.810237 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz"
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.818441 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a9171c0531db9becb11a908b3a4c754e11832d0e0365adfed9405abc5f51867c"}
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.821883 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj"
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.822665 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgf94" event={"ID":"cc91d441-7f4a-45f8-8f71-1f04e4ade80c","Type":"ContainerStarted","Data":"053e278127621d0aa574a001b3d7f98dd3d2a28ff0f85cb3abcc55c7682fa466"}
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.832360 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqq5l" event={"ID":"bc7ccff8-2db2-4663-9565-42f2357e4bda","Type":"ContainerStarted","Data":"f45d37f7ac621a924bdf6d205f6dcfb689dbb7f1904649cc3bbc2a2dac0231b6"}
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.839354 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" podStartSLOduration=12.839338694 podStartE2EDuration="12.839338694s" podCreationTimestamp="2026-01-21 14:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:27.83833554 +0000 UTC m=+149.915168569" watchObservedRunningTime="2026-01-21 14:36:27.839338694 +0000 UTC m=+149.916171723"
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.851702 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fl2j4" event={"ID":"3c88f2d9-944f-408e-bfe3-41c8baac6175","Type":"ContainerStarted","Data":"3ede55dacea16111f6202914e24a4d44b7e914f57126067eef1577b038c06a0b"}
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.888525 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.894514 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-77b9d"]
Jan 21 14:36:27 crc kubenswrapper[4902]: W0121 14:36:27.926202 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod008311b3_7361_4466_aacd_01bbaa16f6df.slice/crio-e20ca1be73e15aa077e2b594ce74f037b1aa06b9991f1e73b39c1c1eae4ecbca WatchSource:0}: Error finding container e20ca1be73e15aa077e2b594ce74f037b1aa06b9991f1e73b39c1c1eae4ecbca: Status 404 returned error can't find the container with id e20ca1be73e15aa077e2b594ce74f037b1aa06b9991f1e73b39c1c1eae4ecbca
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.968633 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd"
Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.976433 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd"
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.025901 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.063872 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87"
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.079064 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87"
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.135082 4902 patch_prober.go:28] interesting pod/apiserver-76f77b778f-x9bhh container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 21 14:36:28 crc kubenswrapper[4902]: [+]log ok
Jan 21 14:36:28 crc kubenswrapper[4902]: [+]etcd ok
Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/max-in-flight-filter ok
Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 21 14:36:28 crc kubenswrapper[4902]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 21 14:36:28 crc kubenswrapper[4902]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 21 14:36:28 crc kubenswrapper[4902]: [-]poststarthook/openshift.io-startinformers failed: reason withheld
Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 21 14:36:28 crc kubenswrapper[4902]: livez check failed
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.135457 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" podUID="64d60c19-a655-408a-99e4-becff3e27018" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.206154 4902 patch_prober.go:28] interesting pod/downloads-7954f5f757-j7zvj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.206210 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-j7zvj" podUID="8285f69a-516d-4bdd-9a14-72d966a0b208" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.206497 4902 patch_prober.go:28] interesting pod/downloads-7954f5f757-j7zvj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.206551 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-j7zvj" podUID="8285f69a-516d-4bdd-9a14-72d966a0b208" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.254293 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d7hf5"]
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.262843 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7hf5"
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.266581 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2lccn"
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.266668 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.270848 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 14:36:28 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld
Jan 21 14:36:28 crc kubenswrapper[4902]: [+]process-running ok
Jan 21 14:36:28 crc kubenswrapper[4902]: healthz check failed
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.270892 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.286216 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp"
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.335907 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-utilities\") pod \"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5"
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.346696 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wc5g\" (UniqueName: \"kubernetes.io/projected/19482ae1-f291-4111-83b5-56fa37063508-kube-api-access-5wc5g\") pod \"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5"
Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.347524 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-catalog-content\") pod
\"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.384901 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.385762 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.385911 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7hf5"] Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.385983 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nccj"] Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.449958 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-utilities\") pod \"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.450307 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wc5g\" (UniqueName: \"kubernetes.io/projected/19482ae1-f291-4111-83b5-56fa37063508-kube-api-access-5wc5g\") pod \"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.450414 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-catalog-content\") pod \"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.450889 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-catalog-content\") pod \"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.451276 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-utilities\") pod \"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.503312 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wc5g\" (UniqueName: \"kubernetes.io/projected/19482ae1-f291-4111-83b5-56fa37063508-kube-api-access-5wc5g\") pod \"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.624936 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dl5zx"] Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.625277 4902 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.626652 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.645972 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dl5zx"] Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.652987 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-catalog-content\") pod \"redhat-marketplace-dl5zx\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.653215 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhvxb\" (UniqueName: \"kubernetes.io/projected/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-kube-api-access-xhvxb\") pod \"redhat-marketplace-dl5zx\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.653312 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-utilities\") pod \"redhat-marketplace-dl5zx\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.754726 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-catalog-content\") pod \"redhat-marketplace-dl5zx\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.754797 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhvxb\" (UniqueName: \"kubernetes.io/projected/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-kube-api-access-xhvxb\") pod \"redhat-marketplace-dl5zx\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.754833 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-utilities\") pod \"redhat-marketplace-dl5zx\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.755320 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-utilities\") pod \"redhat-marketplace-dl5zx\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.755568 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-catalog-content\") pod \"redhat-marketplace-dl5zx\" (UID: 
\"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.776843 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhvxb\" (UniqueName: \"kubernetes.io/projected/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-kube-api-access-xhvxb\") pod \"redhat-marketplace-dl5zx\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.866036 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5ad6d502e9f3d07aeeec65a6e1e3c4988e877e7cb0d54afdaee4b4dbed4dd820"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.866124 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.868725 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"50739c0e40ed1ca48d87578bfa5f765d33859970fe56bbf1abe3197ae90ef763"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.871175 4902 generic.go:334] "Generic (PLEG): container finished" podID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerID="4e686b959372288a5668349b284ecd38a38ea795d787fa0d477db1901cf9976c" exitCode=0 Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.871406 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgf94" event={"ID":"cc91d441-7f4a-45f8-8f71-1f04e4ade80c","Type":"ContainerDied","Data":"4e686b959372288a5668349b284ecd38a38ea795d787fa0d477db1901cf9976c"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.872706 4902 generic.go:334] "Generic (PLEG): container finished" podID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerID="a1204ec2b5e76cfd0fb6167da34f831607a537ca3ed511cbf74c9c91b780c2f9" exitCode=0 Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.872777 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqq5l" event={"ID":"bc7ccff8-2db2-4663-9565-42f2357e4bda","Type":"ContainerDied","Data":"a1204ec2b5e76cfd0fb6167da34f831607a537ca3ed511cbf74c9c91b780c2f9"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.873954 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.888314 4902 generic.go:334] "Generic (PLEG): container finished" podID="008311b3-7361-4466-aacd-01bbaa16f6df" containerID="063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9" exitCode=0 Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.888702 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77b9d" event={"ID":"008311b3-7361-4466-aacd-01bbaa16f6df","Type":"ContainerDied","Data":"063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.888731 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77b9d" 
event={"ID":"008311b3-7361-4466-aacd-01bbaa16f6df","Type":"ContainerStarted","Data":"e20ca1be73e15aa077e2b594ce74f037b1aa06b9991f1e73b39c1c1eae4ecbca"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.896524 4902 generic.go:334] "Generic (PLEG): container finished" podID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerID="d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6" exitCode=0 Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.896586 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fl2j4" event={"ID":"3c88f2d9-944f-408e-bfe3-41c8baac6175","Type":"ContainerDied","Data":"d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.899342 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" event={"ID":"2e95c252-bd71-44fe-a8f1-d9a346d8a882","Type":"ContainerStarted","Data":"7547a62e909793d452303b8e38ed4e3709638a07c8cd2df82117a97266265a83"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.899372 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" event={"ID":"2e95c252-bd71-44fe-a8f1-d9a346d8a882","Type":"ContainerStarted","Data":"72fa44f70a1a8a5c4b377700f7f908db843af15c5da8c33d09c4e26da32bbe19"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.899773 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.910034 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"09559999a949a133e59b5653d492ad8fb626c72ab6bfa2ced952444315ae1b5a"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.919551 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.944017 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.972438 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" podStartSLOduration=130.972423208 podStartE2EDuration="2m10.972423208s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:28.970603577 +0000 UTC m=+151.047436606" watchObservedRunningTime="2026-01-21 14:36:28.972423208 +0000 UTC m=+151.049256237" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.191614 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7hf5"] Jan 21 14:36:29 crc kubenswrapper[4902]: W0121 14:36:29.204639 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19482ae1_f291_4111_83b5_56fa37063508.slice/crio-024d9ea6e07bff7f0ecb8463467da83d20693d50a025a771bbc45b531070e2fd WatchSource:0}: Error finding container 024d9ea6e07bff7f0ecb8463467da83d20693d50a025a771bbc45b531070e2fd: Status 404 returned error can't find the container with id 024d9ea6e07bff7f0ecb8463467da83d20693d50a025a771bbc45b531070e2fd Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.233269 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-98c57"] Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.237312 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.240996 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.265973 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2f8t\" (UniqueName: \"kubernetes.io/projected/ea3b3336-0258-4b66-bd33-dd4e01543236-kube-api-access-t2f8t\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.266073 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-catalog-content\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.266099 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-utilities\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.267297 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:29 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 
14:36:29 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:29 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.267376 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.309117 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-98c57"] Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.365871 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dl5zx"] Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.367533 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-catalog-content\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.367677 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-utilities\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.367848 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2f8t\" (UniqueName: \"kubernetes.io/projected/ea3b3336-0258-4b66-bd33-dd4e01543236-kube-api-access-t2f8t\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.367959 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-catalog-content\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.368066 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-utilities\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.386833 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2f8t\" (UniqueName: \"kubernetes.io/projected/ea3b3336-0258-4b66-bd33-dd4e01543236-kube-api-access-t2f8t\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.568970 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.624736 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-chl56"] Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.625937 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.640309 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chl56"] Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.672196 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-utilities\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.672239 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq29v\" (UniqueName: \"kubernetes.io/projected/64be302e-c39a-4e45-8b5d-07b8819a6eb0-kube-api-access-jq29v\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.672351 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-catalog-content\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.703689 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.707238 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.709430 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.709787 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.713255 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.773883 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.773956 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-catalog-content\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.774148 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-utilities\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.774190 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.774210 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq29v\" (UniqueName: \"kubernetes.io/projected/64be302e-c39a-4e45-8b5d-07b8819a6eb0-kube-api-access-jq29v\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.774488 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-catalog-content\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.774740 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-utilities\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.823664 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jq29v\" (UniqueName: \"kubernetes.io/projected/64be302e-c39a-4e45-8b5d-07b8819a6eb0-kube-api-access-jq29v\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.876257 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.876360 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.876474 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.918163 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.924962 4902 generic.go:334] "Generic (PLEG): container finished" podID="19482ae1-f291-4111-83b5-56fa37063508" containerID="9b604d27ef105b652ea19c99e2ae291eacdb1348bd4b5e106e90424e329a7180" exitCode=0 Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.925068 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7hf5" event={"ID":"19482ae1-f291-4111-83b5-56fa37063508","Type":"ContainerDied","Data":"9b604d27ef105b652ea19c99e2ae291eacdb1348bd4b5e106e90424e329a7180"} Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.925098 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7hf5" event={"ID":"19482ae1-f291-4111-83b5-56fa37063508","Type":"ContainerStarted","Data":"024d9ea6e07bff7f0ecb8463467da83d20693d50a025a771bbc45b531070e2fd"} Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.927820 4902 generic.go:334] "Generic (PLEG): container finished" podID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerID="3c2863c18937166425d91344f3ec1614a7f70129ffe061c9c5ee80eb31756b3f" exitCode=0 Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.928448 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl5zx" event={"ID":"4504c44c-17da-4a32-ac81-7efc9ec6b1cb","Type":"ContainerDied","Data":"3c2863c18937166425d91344f3ec1614a7f70129ffe061c9c5ee80eb31756b3f"} Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.928478 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl5zx" 
event={"ID":"4504c44c-17da-4a32-ac81-7efc9ec6b1cb","Type":"ContainerStarted","Data":"6d22776fe71b564cc70ae18c09c444bfb7b9c6605b6f0f8a041e615143a16c69"} Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.962799 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-98c57"] Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.985305 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.025384 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.261343 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:30 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:30 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:30 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.261647 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.333549 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chl56"] Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.421112 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 14:36:30 crc kubenswrapper[4902]: W0121 14:36:30.443587 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod50d5a74e_3e40_493a_bb17_3de7c5ff8b26.slice/crio-b77d51c439caba1ec86969bf0f6ee5c490a80cbf97747b1dbe225a9d8179f65a WatchSource:0}: Error finding container b77d51c439caba1ec86969bf0f6ee5c490a80cbf97747b1dbe225a9d8179f65a: Status 404 returned error can't find the container with id b77d51c439caba1ec86969bf0f6ee5c490a80cbf97747b1dbe225a9d8179f65a Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.951616 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98c57" event={"ID":"ea3b3336-0258-4b66-bd33-dd4e01543236","Type":"ContainerDied","Data":"0863c2ef512883dfa5c8cb15d84b8d3e8007faf5a420481b07e81570d0bbc513"} Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.951885 4902 generic.go:334] "Generic (PLEG): container finished" podID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerID="0863c2ef512883dfa5c8cb15d84b8d3e8007faf5a420481b07e81570d0bbc513" exitCode=0 Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.952021 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98c57" event={"ID":"ea3b3336-0258-4b66-bd33-dd4e01543236","Type":"ContainerStarted","Data":"5d200290d772299c202f1a65fa0061ebdcb1ccceea36fa735b536ebf39ba3497"} Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.957298 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"50d5a74e-3e40-493a-bb17-3de7c5ff8b26","Type":"ContainerStarted","Data":"b77d51c439caba1ec86969bf0f6ee5c490a80cbf97747b1dbe225a9d8179f65a"} Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.967475 4902 generic.go:334] "Generic (PLEG): container finished" podID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerID="b780515dde8ccd794f02bd3dc6005c6baf519de90ebd8e42d401146a27f9e971" exitCode=0 Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.968174 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chl56" event={"ID":"64be302e-c39a-4e45-8b5d-07b8819a6eb0","Type":"ContainerDied","Data":"b780515dde8ccd794f02bd3dc6005c6baf519de90ebd8e42d401146a27f9e971"} Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.968238 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chl56" event={"ID":"64be302e-c39a-4e45-8b5d-07b8819a6eb0","Type":"ContainerStarted","Data":"c2b8854fe921d56cd0a1e4ec23fb7eafebd1972826e58b8204b172f529d4bbf4"} Jan 21 14:36:31 crc kubenswrapper[4902]: I0121 14:36:31.262620 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:31 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:31 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:31 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:31 crc kubenswrapper[4902]: I0121 14:36:31.262690 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:32 crc kubenswrapper[4902]: I0121 14:36:32.039229 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"50d5a74e-3e40-493a-bb17-3de7c5ff8b26","Type":"ContainerStarted","Data":"71543878641ee45cb3815d79c247489e1669d37bce0ec428e1d26077ad1a012f"} Jan 21 14:36:32 crc kubenswrapper[4902]: I0121 14:36:32.062412 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.062396078 podStartE2EDuration="3.062396078s" podCreationTimestamp="2026-01-21 14:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:32.059285543 +0000 UTC m=+154.136118572" watchObservedRunningTime="2026-01-21 14:36:32.062396078 +0000 UTC m=+154.139229107" Jan 21 14:36:32 crc kubenswrapper[4902]: I0121 14:36:32.262412 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:32 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:32 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:32 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:32 crc kubenswrapper[4902]: I0121 14:36:32.262491 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:32 crc kubenswrapper[4902]: I0121 14:36:32.772172 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:32 crc kubenswrapper[4902]: I0121 14:36:32.777360 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.049401 4902 generic.go:334] "Generic (PLEG): container finished" podID="70656800-9429-43df-a1cb-7c8617d23b3f" containerID="de8fcd8c3571217b412f9ba6c688fc875ba6c7c7eb18b7b87d8ab03820c43542" exitCode=0 Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.049471 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" event={"ID":"70656800-9429-43df-a1cb-7c8617d23b3f","Type":"ContainerDied","Data":"de8fcd8c3571217b412f9ba6c688fc875ba6c7c7eb18b7b87d8ab03820c43542"} Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.051768 4902 generic.go:334] "Generic (PLEG): container finished" podID="50d5a74e-3e40-493a-bb17-3de7c5ff8b26" containerID="71543878641ee45cb3815d79c247489e1669d37bce0ec428e1d26077ad1a012f" exitCode=0 Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.052734 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"50d5a74e-3e40-493a-bb17-3de7c5ff8b26","Type":"ContainerDied","Data":"71543878641ee45cb3815d79c247489e1669d37bce0ec428e1d26077ad1a012f"} Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.265141 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:33 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:33 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:33 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.265217 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.387573 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.588410 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.589493 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.594175 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.595191 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.601986 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.670109 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d6a681c-0b89-4f72-9f57-64c0915af789-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4d6a681c-0b89-4f72-9f57-64c0915af789\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.670209 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d6a681c-0b89-4f72-9f57-64c0915af789-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4d6a681c-0b89-4f72-9f57-64c0915af789\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.771172 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d6a681c-0b89-4f72-9f57-64c0915af789-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4d6a681c-0b89-4f72-9f57-64c0915af789\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.771250 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d6a681c-0b89-4f72-9f57-64c0915af789-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4d6a681c-0b89-4f72-9f57-64c0915af789\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.771335 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d6a681c-0b89-4f72-9f57-64c0915af789-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4d6a681c-0b89-4f72-9f57-64c0915af789\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.792951 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d6a681c-0b89-4f72-9f57-64c0915af789-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4d6a681c-0b89-4f72-9f57-64c0915af789\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.970324 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.263565 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:34 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:34 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:34 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.263625 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.414910 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.476030 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.590787 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70656800-9429-43df-a1cb-7c8617d23b3f-secret-volume\") pod \"70656800-9429-43df-a1cb-7c8617d23b3f\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.590866 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70656800-9429-43df-a1cb-7c8617d23b3f-config-volume\") pod \"70656800-9429-43df-a1cb-7c8617d23b3f\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.590971 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfg9t\" (UniqueName: \"kubernetes.io/projected/70656800-9429-43df-a1cb-7c8617d23b3f-kube-api-access-sfg9t\") pod \"70656800-9429-43df-a1cb-7c8617d23b3f\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.590999 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kube-api-access\") pod \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\" (UID: \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\") " Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.591017 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kubelet-dir\") pod \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\" (UID: \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\") " Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.592222 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "50d5a74e-3e40-493a-bb17-3de7c5ff8b26" (UID: "50d5a74e-3e40-493a-bb17-3de7c5ff8b26"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.592673 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70656800-9429-43df-a1cb-7c8617d23b3f-config-volume" (OuterVolumeSpecName: "config-volume") pod "70656800-9429-43df-a1cb-7c8617d23b3f" (UID: "70656800-9429-43df-a1cb-7c8617d23b3f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.600843 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.604515 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70656800-9429-43df-a1cb-7c8617d23b3f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "70656800-9429-43df-a1cb-7c8617d23b3f" (UID: "70656800-9429-43df-a1cb-7c8617d23b3f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.610255 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "50d5a74e-3e40-493a-bb17-3de7c5ff8b26" (UID: "50d5a74e-3e40-493a-bb17-3de7c5ff8b26"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.607707 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70656800-9429-43df-a1cb-7c8617d23b3f-kube-api-access-sfg9t" (OuterVolumeSpecName: "kube-api-access-sfg9t") pod "70656800-9429-43df-a1cb-7c8617d23b3f" (UID: "70656800-9429-43df-a1cb-7c8617d23b3f"). InnerVolumeSpecName "kube-api-access-sfg9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.693734 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfg9t\" (UniqueName: \"kubernetes.io/projected/70656800-9429-43df-a1cb-7c8617d23b3f-kube-api-access-sfg9t\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.693783 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.693793 4902 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.693803 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70656800-9429-43df-a1cb-7c8617d23b3f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.693814 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70656800-9429-43df-a1cb-7c8617d23b3f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.126382 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4d6a681c-0b89-4f72-9f57-64c0915af789","Type":"ContainerStarted","Data":"968f9e6a19298b7a86bae544ca30fb68936bb23e6bab950c272feb412b841333"} Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.154485 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"50d5a74e-3e40-493a-bb17-3de7c5ff8b26","Type":"ContainerDied","Data":"b77d51c439caba1ec86969bf0f6ee5c490a80cbf97747b1dbe225a9d8179f65a"} Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.154542 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b77d51c439caba1ec86969bf0f6ee5c490a80cbf97747b1dbe225a9d8179f65a" Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.154650 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.176081 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" event={"ID":"70656800-9429-43df-a1cb-7c8617d23b3f","Type":"ContainerDied","Data":"d99c6757f9658ce32d4704b76f3d35e4415e44f33b1b27def593d2cbcd31f4c9"} Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.176173 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d99c6757f9658ce32d4704b76f3d35e4415e44f33b1b27def593d2cbcd31f4c9" Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.176266 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.264575 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:35 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:35 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:35 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.264646 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:36 crc kubenswrapper[4902]: I0121 14:36:36.209179 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4d6a681c-0b89-4f72-9f57-64c0915af789","Type":"ContainerStarted","Data":"a824c02aab82ea190dd1e12ccf4ee2855e18f36e4ecd719a6e1b635979dd07b4"} Jan 21 14:36:36 crc kubenswrapper[4902]: I0121 14:36:36.263487 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:36 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:36 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:36 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:36 crc kubenswrapper[4902]: I0121 14:36:36.263557 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:36 crc kubenswrapper[4902]: I0121 14:36:36.958680 4902 patch_prober.go:28] interesting pod/console-f9d7485db-9nw4v container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 21 14:36:36 crc kubenswrapper[4902]: I0121 14:36:36.958776 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9nw4v" podUID="853f0809-8828-4976-9b04-dd078ab64ced" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 21 14:36:37 crc kubenswrapper[4902]: I0121 14:36:37.230624 4902 generic.go:334] "Generic (PLEG): container finished" podID="4d6a681c-0b89-4f72-9f57-64c0915af789" containerID="a824c02aab82ea190dd1e12ccf4ee2855e18f36e4ecd719a6e1b635979dd07b4" exitCode=0 Jan 21 14:36:37 crc kubenswrapper[4902]: I0121 14:36:37.230670 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4d6a681c-0b89-4f72-9f57-64c0915af789","Type":"ContainerDied","Data":"a824c02aab82ea190dd1e12ccf4ee2855e18f36e4ecd719a6e1b635979dd07b4"} Jan 21 14:36:37 crc kubenswrapper[4902]: I0121 14:36:37.261851 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:37 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:37 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:37 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:37 crc kubenswrapper[4902]: I0121 14:36:37.261937 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:38 crc kubenswrapper[4902]: I0121 14:36:38.212630 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-j7zvj" Jan 21 14:36:38 crc kubenswrapper[4902]: I0121 14:36:38.304968 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:38 crc kubenswrapper[4902]: I0121 14:36:38.308289 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:40 crc kubenswrapper[4902]: I0121 14:36:40.540604 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:40 crc kubenswrapper[4902]: I0121 14:36:40.546698 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:40 crc kubenswrapper[4902]: I0121 14:36:40.637607 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.030853 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.082382 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d6a681c-0b89-4f72-9f57-64c0915af789-kubelet-dir\") pod \"4d6a681c-0b89-4f72-9f57-64c0915af789\" (UID: \"4d6a681c-0b89-4f72-9f57-64c0915af789\") " Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.082520 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d6a681c-0b89-4f72-9f57-64c0915af789-kube-api-access\") pod \"4d6a681c-0b89-4f72-9f57-64c0915af789\" (UID: \"4d6a681c-0b89-4f72-9f57-64c0915af789\") " Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.083232 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d6a681c-0b89-4f72-9f57-64c0915af789-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4d6a681c-0b89-4f72-9f57-64c0915af789" (UID: "4d6a681c-0b89-4f72-9f57-64c0915af789"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.088138 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d6a681c-0b89-4f72-9f57-64c0915af789-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4d6a681c-0b89-4f72-9f57-64c0915af789" (UID: "4d6a681c-0b89-4f72-9f57-64c0915af789"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.184133 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d6a681c-0b89-4f72-9f57-64c0915af789-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.184179 4902 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d6a681c-0b89-4f72-9f57-64c0915af789-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.288275 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4d6a681c-0b89-4f72-9f57-64c0915af789","Type":"ContainerDied","Data":"968f9e6a19298b7a86bae544ca30fb68936bb23e6bab950c272feb412b841333"} Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.288325 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="968f9e6a19298b7a86bae544ca30fb68936bb23e6bab950c272feb412b841333" Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.288322 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:45 crc kubenswrapper[4902]: I0121 14:36:45.159710 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kq588"] Jan 21 14:36:45 crc kubenswrapper[4902]: I0121 14:36:45.304623 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kq588" event={"ID":"05d94e6a-249a-484c-8895-085e81f1dfaa","Type":"ContainerStarted","Data":"c6e48ac868a0c03714792fe2441343448e65f2b639ab804ec1c8fcf4b54f624f"} Jan 21 14:36:46 crc kubenswrapper[4902]: I0121 14:36:46.312161 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kq588" event={"ID":"05d94e6a-249a-484c-8895-085e81f1dfaa","Type":"ContainerStarted","Data":"5ec995d56589a9eddf5c407fa139e3266611a239641b68415251855851035bca"} Jan 21 14:36:46 crc kubenswrapper[4902]: I0121 14:36:46.724780 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tn2zp"] Jan 21 14:36:46 crc kubenswrapper[4902]: I0121 14:36:46.725337 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" podUID="c7158f8a-be32-4700-857f-faf9157f99f5" containerName="controller-manager" containerID="cri-o://367d869d9b3c4b737b065ed87b6bd46066ee2a10f6733ab3b357221abf8fd7a9" gracePeriod=30 Jan 21 14:36:46 crc kubenswrapper[4902]: I0121 14:36:46.741052 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"] Jan 21 14:36:46 crc kubenswrapper[4902]: I0121 14:36:46.741300 4902 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerName="route-controller-manager" containerID="cri-o://6352bb96995cea97dbd91f19d4ac33bcf83056c8d4e8ed01ff2fda9bf228a144" gracePeriod=30 Jan 21 14:36:46 crc kubenswrapper[4902]: I0121 14:36:46.976596 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:46 crc kubenswrapper[4902]: I0121 14:36:46.980439 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.320167 4902 generic.go:334] "Generic (PLEG): container finished" podID="c7158f8a-be32-4700-857f-faf9157f99f5" containerID="367d869d9b3c4b737b065ed87b6bd46066ee2a10f6733ab3b357221abf8fd7a9" exitCode=0 Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.320210 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" event={"ID":"c7158f8a-be32-4700-857f-faf9157f99f5","Type":"ContainerDied","Data":"367d869d9b3c4b737b065ed87b6bd46066ee2a10f6733ab3b357221abf8fd7a9"} Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.321752 4902 generic.go:334] "Generic (PLEG): container finished" podID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerID="6352bb96995cea97dbd91f19d4ac33bcf83056c8d4e8ed01ff2fda9bf228a144" exitCode=0 Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.322356 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" event={"ID":"01ee90aa-9465-4cd2-97a0-ce735d557649","Type":"ContainerDied","Data":"6352bb96995cea97dbd91f19d4ac33bcf83056c8d4e8ed01ff2fda9bf228a144"} Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.733223 4902 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xrcxf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.733284 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.769658 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.769721 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.827574 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:48 crc kubenswrapper[4902]: I0121 14:36:48.283602 4902 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tn2zp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 21 14:36:48 crc kubenswrapper[4902]: I0121 14:36:48.283689 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" podUID="c7158f8a-be32-4700-857f-faf9157f99f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 21 14:36:57 crc kubenswrapper[4902]: I0121 14:36:57.660317 4902 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 14:36:57 crc kubenswrapper[4902]: I0121 14:36:57.661022 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 14:36:57 crc kubenswrapper[4902]: I0121 14:36:57.733637 4902 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xrcxf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 21 14:36:57 crc kubenswrapper[4902]: I0121 14:36:57.733700 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 21 14:36:58 crc kubenswrapper[4902]: I0121 14:36:58.307163 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 14:36:59 crc kubenswrapper[4902]: I0121 14:36:59.283447 4902 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tn2zp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 14:36:59 crc kubenswrapper[4902]: I0121 14:36:59.283557 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" podUID="c7158f8a-be32-4700-857f-faf9157f99f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 14:37:06 crc 
kubenswrapper[4902]: I0121 14:37:06.079207 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:37:07 crc kubenswrapper[4902]: I0121 14:37:07.733110 4902 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xrcxf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 21 14:37:07 crc kubenswrapper[4902]: I0121 14:37:07.733468 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 21 14:37:08 crc kubenswrapper[4902]: E0121 14:37:08.722772 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 14:37:08 crc kubenswrapper[4902]: E0121 14:37:08.722981 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jq29v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-chl56_openshift-marketplace(64be302e-c39a-4e45-8b5d-07b8819a6eb0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:37:08 crc kubenswrapper[4902]: E0121 14:37:08.724190 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-chl56" 
podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" Jan 21 14:37:09 crc kubenswrapper[4902]: I0121 14:37:09.282985 4902 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tn2zp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 14:37:09 crc kubenswrapper[4902]: I0121 14:37:09.283098 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" podUID="c7158f8a-be32-4700-857f-faf9157f99f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 14:37:09 crc kubenswrapper[4902]: E0121 14:37:09.646941 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-chl56" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.393959 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 14:37:10 crc kubenswrapper[4902]: E0121 14:37:10.394319 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70656800-9429-43df-a1cb-7c8617d23b3f" containerName="collect-profiles" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.394332 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="70656800-9429-43df-a1cb-7c8617d23b3f" containerName="collect-profiles" Jan 21 14:37:10 crc kubenswrapper[4902]: E0121 14:37:10.394342 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d6a681c-0b89-4f72-9f57-64c0915af789" containerName="pruner" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.394348 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d6a681c-0b89-4f72-9f57-64c0915af789" containerName="pruner" Jan 21 14:37:10 crc kubenswrapper[4902]: E0121 14:37:10.394378 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d5a74e-3e40-493a-bb17-3de7c5ff8b26" containerName="pruner" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.394385 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d5a74e-3e40-493a-bb17-3de7c5ff8b26" containerName="pruner" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.394511 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="50d5a74e-3e40-493a-bb17-3de7c5ff8b26" containerName="pruner" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.394546 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d6a681c-0b89-4f72-9f57-64c0915af789" containerName="pruner" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.394555 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="70656800-9429-43df-a1cb-7c8617d23b3f" containerName="collect-profiles" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.395062 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.398894 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.399174 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.400704 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.480888 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d86c450a-56ce-4439-8396-f6d87fee149c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d86c450a-56ce-4439-8396-f6d87fee149c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.481432 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d86c450a-56ce-4439-8396-f6d87fee149c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d86c450a-56ce-4439-8396-f6d87fee149c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.583232 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d86c450a-56ce-4439-8396-f6d87fee149c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d86c450a-56ce-4439-8396-f6d87fee149c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.583403 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d86c450a-56ce-4439-8396-f6d87fee149c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d86c450a-56ce-4439-8396-f6d87fee149c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.583524 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d86c450a-56ce-4439-8396-f6d87fee149c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d86c450a-56ce-4439-8396-f6d87fee149c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.615164 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d86c450a-56ce-4439-8396-f6d87fee149c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d86c450a-56ce-4439-8396-f6d87fee149c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.746248 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:12 crc kubenswrapper[4902]: E0121 14:37:12.509623 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 14:37:12 crc kubenswrapper[4902]: E0121 14:37:12.509806 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kswz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xgf94_openshift-marketplace(cc91d441-7f4a-45f8-8f71-1f04e4ade80c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:37:12 crc kubenswrapper[4902]: E0121 14:37:12.511590 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-xgf94" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" Jan 21 14:37:12 crc kubenswrapper[4902]: E0121 14:37:12.995622 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 14:37:12 crc kubenswrapper[4902]: E0121 14:37:12.995835 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ntv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fqq5l_openshift-marketplace(bc7ccff8-2db2-4663-9565-42f2357e4bda): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:37:12 crc kubenswrapper[4902]: E0121 14:37:12.997029 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-fqq5l" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.786300 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.787300 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.799981 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.849438 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-var-lock\") pod \"installer-9-crc\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.849503 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-kubelet-dir\") pod \"installer-9-crc\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.849532 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84af95e1-2275-49b2-987c-afa33fb32734-kube-api-access\") pod \"installer-9-crc\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.951579 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-var-lock\") pod \"installer-9-crc\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.951683 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-kubelet-dir\") pod \"installer-9-crc\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.951730 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84af95e1-2275-49b2-987c-afa33fb32734-kube-api-access\") pod \"installer-9-crc\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.951743 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-var-lock\") pod \"installer-9-crc\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.951817 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-kubelet-dir\") pod \"installer-9-crc\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.972505 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84af95e1-2275-49b2-987c-afa33fb32734-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.104625 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.443766 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fqq5l" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.443925 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xgf94" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.547920 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.548289 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhvxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dl5zx_openshift-marketplace(4504c44c-17da-4a32-ac81-7efc9ec6b1cb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.550090 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-dl5zx" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.571510 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.612827 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-758f874869-2jb7w"] Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.613644 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7158f8a-be32-4700-857f-faf9157f99f5" containerName="controller-manager" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.613656 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7158f8a-be32-4700-857f-faf9157f99f5" containerName="controller-manager" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.613746 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7158f8a-be32-4700-857f-faf9157f99f5" containerName="controller-manager" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.614103 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.623463 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.623592 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wc5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-d7hf5_openshift-marketplace(19482ae1-f291-4111-83b5-56fa37063508): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.624860 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-d7hf5" podUID="19482ae1-f291-4111-83b5-56fa37063508" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.625498 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-758f874869-2jb7w"] Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.629653 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.629762 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2f8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-98c57_openshift-marketplace(ea3b3336-0258-4b66-bd33-dd4e01543236): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.630978 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-98c57" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.646751 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 14:37:15 crc 
kubenswrapper[4902]: E0121 14:37:15.646905 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2sp9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-77b9d_openshift-marketplace(008311b3-7361-4466-aacd-01bbaa16f6df): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.648640 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-77b9d" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.661446 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7158f8a-be32-4700-857f-faf9157f99f5-serving-cert\") pod \"c7158f8a-be32-4700-857f-faf9157f99f5\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.661541 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles\") pod \"c7158f8a-be32-4700-857f-faf9157f99f5\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.661642 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-client-ca\") pod \"c7158f8a-be32-4700-857f-faf9157f99f5\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.661664 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q56gh\" (UniqueName: 
\"kubernetes.io/projected/c7158f8a-be32-4700-857f-faf9157f99f5-kube-api-access-q56gh\") pod \"c7158f8a-be32-4700-857f-faf9157f99f5\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.662994 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-config\") pod \"c7158f8a-be32-4700-857f-faf9157f99f5\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.663539 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-config\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.663543 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c7158f8a-be32-4700-857f-faf9157f99f5" (UID: "c7158f8a-be32-4700-857f-faf9157f99f5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.663628 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7db6f852-8480-412f-a9bf-9afd18c41d83-serving-cert\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.663729 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwrwj\" (UniqueName: \"kubernetes.io/projected/7db6f852-8480-412f-a9bf-9afd18c41d83-kube-api-access-wwrwj\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.663750 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-proxy-ca-bundles\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.663781 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-client-ca\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.663926 4902 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.668968 4902 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-client-ca" (OuterVolumeSpecName: "client-ca") pod "c7158f8a-be32-4700-857f-faf9157f99f5" (UID: "c7158f8a-be32-4700-857f-faf9157f99f5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.670730 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.670908 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nlk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fl2j4_openshift-marketplace(3c88f2d9-944f-408e-bfe3-41c8baac6175): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.672326 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fl2j4" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.677843 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7158f8a-be32-4700-857f-faf9157f99f5-kube-api-access-q56gh" (OuterVolumeSpecName: "kube-api-access-q56gh") pod "c7158f8a-be32-4700-857f-faf9157f99f5" (UID: "c7158f8a-be32-4700-857f-faf9157f99f5"). InnerVolumeSpecName "kube-api-access-q56gh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.678412 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-config" (OuterVolumeSpecName: "config") pod "c7158f8a-be32-4700-857f-faf9157f99f5" (UID: "c7158f8a-be32-4700-857f-faf9157f99f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.680296 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7158f8a-be32-4700-857f-faf9157f99f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c7158f8a-be32-4700-857f-faf9157f99f5" (UID: "c7158f8a-be32-4700-857f-faf9157f99f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.765383 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7db6f852-8480-412f-a9bf-9afd18c41d83-serving-cert\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.765468 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwrwj\" (UniqueName: \"kubernetes.io/projected/7db6f852-8480-412f-a9bf-9afd18c41d83-kube-api-access-wwrwj\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.765496 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-proxy-ca-bundles\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.765518 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-client-ca\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.765585 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-config\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.765640 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.765654 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q56gh\" (UniqueName: \"kubernetes.io/projected/c7158f8a-be32-4700-857f-faf9157f99f5-kube-api-access-q56gh\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 
crc kubenswrapper[4902]: I0121 14:37:15.765666 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.765677 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7158f8a-be32-4700-857f-faf9157f99f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.767749 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-config\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.768150 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-proxy-ca-bundles\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.769586 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-client-ca\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.776691 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7db6f852-8480-412f-a9bf-9afd18c41d83-serving-cert\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.779806 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.784911 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwrwj\" (UniqueName: \"kubernetes.io/projected/7db6f852-8480-412f-a9bf-9afd18c41d83-kube-api-access-wwrwj\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.866478 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbhnr\" (UniqueName: \"kubernetes.io/projected/01ee90aa-9465-4cd2-97a0-ce735d557649-kube-api-access-gbhnr\") pod \"01ee90aa-9465-4cd2-97a0-ce735d557649\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.866557 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert\") pod \"01ee90aa-9465-4cd2-97a0-ce735d557649\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.866601 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-client-ca\") pod \"01ee90aa-9465-4cd2-97a0-ce735d557649\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.866656 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config\") pod \"01ee90aa-9465-4cd2-97a0-ce735d557649\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.867997 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-client-ca" (OuterVolumeSpecName: "client-ca") pod "01ee90aa-9465-4cd2-97a0-ce735d557649" (UID: "01ee90aa-9465-4cd2-97a0-ce735d557649"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.868064 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config" (OuterVolumeSpecName: "config") pod "01ee90aa-9465-4cd2-97a0-ce735d557649" (UID: "01ee90aa-9465-4cd2-97a0-ce735d557649"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.870225 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ee90aa-9465-4cd2-97a0-ce735d557649-kube-api-access-gbhnr" (OuterVolumeSpecName: "kube-api-access-gbhnr") pod "01ee90aa-9465-4cd2-97a0-ce735d557649" (UID: "01ee90aa-9465-4cd2-97a0-ce735d557649"). InnerVolumeSpecName "kube-api-access-gbhnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.870777 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ee90aa-9465-4cd2-97a0-ce735d557649" (UID: "01ee90aa-9465-4cd2-97a0-ce735d557649"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.955922 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.968250 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbhnr\" (UniqueName: \"kubernetes.io/projected/01ee90aa-9465-4cd2-97a0-ce735d557649-kube-api-access-gbhnr\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.968308 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.968326 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.968343 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.995966 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.000410 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.183616 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-758f874869-2jb7w"] Jan 21 14:37:16 crc kubenswrapper[4902]: W0121 14:37:16.195082 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7db6f852_8480_412f_a9bf_9afd18c41d83.slice/crio-1430f5f7f582f37ed906dc3199394088ef56b752688bdfa0d7374651d056e2d3 WatchSource:0}: Error finding container 1430f5f7f582f37ed906dc3199394088ef56b752688bdfa0d7374651d056e2d3: Status 404 returned error can't find the container with id 1430f5f7f582f37ed906dc3199394088ef56b752688bdfa0d7374651d056e2d3 Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.494440 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kq588" event={"ID":"05d94e6a-249a-484c-8895-085e81f1dfaa","Type":"ContainerStarted","Data":"89f9eec850349a8d70abaf29a6e16ed37c20bae82e8785de31be9800941385f7"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.498468 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.498513 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" event={"ID":"01ee90aa-9465-4cd2-97a0-ce735d557649","Type":"ContainerDied","Data":"1d6f20bc21db99ffc3b51f783b09029cf7dec2c4ed9b3a8a2f63bf561b414a3a"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.498578 4902 scope.go:117] "RemoveContainer" containerID="6352bb96995cea97dbd91f19d4ac33bcf83056c8d4e8ed01ff2fda9bf228a144" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.504937 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" event={"ID":"7db6f852-8480-412f-a9bf-9afd18c41d83","Type":"ContainerStarted","Data":"7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.504974 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" event={"ID":"7db6f852-8480-412f-a9bf-9afd18c41d83","Type":"ContainerStarted","Data":"1430f5f7f582f37ed906dc3199394088ef56b752688bdfa0d7374651d056e2d3"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.505836 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.508248 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d86c450a-56ce-4439-8396-f6d87fee149c","Type":"ContainerStarted","Data":"417166d5735591f122acc8577f8a8ed9b2f4076fce7e42a14e97f0765b04b1d6"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.508309 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d86c450a-56ce-4439-8396-f6d87fee149c","Type":"ContainerStarted","Data":"3eed38a6c389fd9ea1b4b51ae79af02a7862cc31d751100e19c8ae0ae07b17e7"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.515098 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84af95e1-2275-49b2-987c-afa33fb32734","Type":"ContainerStarted","Data":"ce71892c9cc4a5eca454b5acdd2876bc8fdf1542a231264709d1d8546488cc23"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.515155 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84af95e1-2275-49b2-987c-afa33fb32734","Type":"ContainerStarted","Data":"1e0e3b99d3e199bf0a5109aed2aaf9e421c0eab2e9ccba48ebac5e8687fa5207"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.518064 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.518536 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" event={"ID":"c7158f8a-be32-4700-857f-faf9157f99f5","Type":"ContainerDied","Data":"31b5818a193a42b1200764cd8a3a2ec82450c46b99cee82fd307ec9a84582b72"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.518576 4902 scope.go:117] "RemoveContainer" containerID="367d869d9b3c4b737b065ed87b6bd46066ee2a10f6733ab3b357221abf8fd7a9" Jan 21 14:37:16 crc kubenswrapper[4902]: E0121 14:37:16.520698 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-77b9d" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" Jan 21 14:37:16 crc kubenswrapper[4902]: E0121 14:37:16.520694 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dl5zx" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" Jan 21 14:37:16 crc kubenswrapper[4902]: E0121 14:37:16.520861 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fl2j4" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" Jan 21 14:37:16 crc kubenswrapper[4902]: E0121 14:37:16.526363 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-d7hf5" podUID="19482ae1-f291-4111-83b5-56fa37063508" Jan 21 14:37:16 crc kubenswrapper[4902]: E0121 14:37:16.526425 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-98c57" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.526472 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.548689 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kq588" podStartSLOduration=178.54866189 podStartE2EDuration="2m58.54866189s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:16.538769841 +0000 UTC m=+198.615602870" watchObservedRunningTime="2026-01-21 14:37:16.54866189 +0000 UTC m=+198.625494919" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.660326 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tn2zp"] Jan 21 
14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.662672 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tn2zp"] Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.695059 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"] Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.701405 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"] Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.715085 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=6.715067197 podStartE2EDuration="6.715067197s" podCreationTimestamp="2026-01-21 14:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:16.713726651 +0000 UTC m=+198.790559680" watchObservedRunningTime="2026-01-21 14:37:16.715067197 +0000 UTC m=+198.791900226" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.775803 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" podStartSLOduration=10.775787846 podStartE2EDuration="10.775787846s" podCreationTimestamp="2026-01-21 14:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:16.773170877 +0000 UTC m=+198.850003916" watchObservedRunningTime="2026-01-21 14:37:16.775787846 +0000 UTC m=+198.852620865" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.776256 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.776248662 podStartE2EDuration="2.776248662s" podCreationTimestamp="2026-01-21 14:37:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:16.753247544 +0000 UTC m=+198.830080573" watchObservedRunningTime="2026-01-21 14:37:16.776248662 +0000 UTC m=+198.853081691" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.525811 4902 generic.go:334] "Generic (PLEG): container finished" podID="d86c450a-56ce-4439-8396-f6d87fee149c" containerID="417166d5735591f122acc8577f8a8ed9b2f4076fce7e42a14e97f0765b04b1d6" exitCode=0 Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.526027 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d86c450a-56ce-4439-8396-f6d87fee149c","Type":"ContainerDied","Data":"417166d5735591f122acc8577f8a8ed9b2f4076fce7e42a14e97f0765b04b1d6"} Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.745176 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f"] Jan 21 14:37:17 crc kubenswrapper[4902]: E0121 14:37:17.745511 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerName="route-controller-manager" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.745523 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerName="route-controller-manager" Jan 21 14:37:17 crc 
kubenswrapper[4902]: I0121 14:37:17.745640 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerName="route-controller-manager" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.746017 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.749411 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.750246 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.750335 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.750483 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.750652 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.751245 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.757824 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f"] Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.773134 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.773200 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.798515 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-client-ca\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.798600 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64242610-9b91-49bc-9400-12298973aad0-serving-cert\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.798666 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-5xh6p\" (UniqueName: \"kubernetes.io/projected/64242610-9b91-49bc-9400-12298973aad0-kube-api-access-5xh6p\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.798710 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-config\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.899472 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xh6p\" (UniqueName: \"kubernetes.io/projected/64242610-9b91-49bc-9400-12298973aad0-kube-api-access-5xh6p\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.899857 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-config\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.899907 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-client-ca\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.899934 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64242610-9b91-49bc-9400-12298973aad0-serving-cert\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.901031 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-client-ca\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.901073 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-config\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.905755 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/64242610-9b91-49bc-9400-12298973aad0-serving-cert\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.915427 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xh6p\" (UniqueName: \"kubernetes.io/projected/64242610-9b91-49bc-9400-12298973aad0-kube-api-access-5xh6p\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.062598 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.306129 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" path="/var/lib/kubelet/pods/01ee90aa-9465-4cd2-97a0-ce735d557649/volumes" Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.307107 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7158f8a-be32-4700-857f-faf9157f99f5" path="/var/lib/kubelet/pods/c7158f8a-be32-4700-857f-faf9157f99f5/volumes" Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.474921 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f"] Jan 21 14:37:18 crc kubenswrapper[4902]: W0121 14:37:18.483598 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64242610_9b91_49bc_9400_12298973aad0.slice/crio-471419efc12598c3a121f35256027b6584df5df609cdadc31ab16086370f4330 WatchSource:0}: Error finding container 471419efc12598c3a121f35256027b6584df5df609cdadc31ab16086370f4330: Status 404 returned error can't find the container with id 471419efc12598c3a121f35256027b6584df5df609cdadc31ab16086370f4330 Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.541815 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" event={"ID":"64242610-9b91-49bc-9400-12298973aad0","Type":"ContainerStarted","Data":"471419efc12598c3a121f35256027b6584df5df609cdadc31ab16086370f4330"} Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.813517 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.915196 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d86c450a-56ce-4439-8396-f6d87fee149c-kube-api-access\") pod \"d86c450a-56ce-4439-8396-f6d87fee149c\" (UID: \"d86c450a-56ce-4439-8396-f6d87fee149c\") " Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.915335 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d86c450a-56ce-4439-8396-f6d87fee149c-kubelet-dir\") pod \"d86c450a-56ce-4439-8396-f6d87fee149c\" (UID: \"d86c450a-56ce-4439-8396-f6d87fee149c\") " Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.915444 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d86c450a-56ce-4439-8396-f6d87fee149c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d86c450a-56ce-4439-8396-f6d87fee149c" (UID: "d86c450a-56ce-4439-8396-f6d87fee149c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.915633 4902 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d86c450a-56ce-4439-8396-f6d87fee149c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.922134 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d86c450a-56ce-4439-8396-f6d87fee149c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d86c450a-56ce-4439-8396-f6d87fee149c" (UID: "d86c450a-56ce-4439-8396-f6d87fee149c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:19 crc kubenswrapper[4902]: I0121 14:37:19.017246 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d86c450a-56ce-4439-8396-f6d87fee149c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:19 crc kubenswrapper[4902]: I0121 14:37:19.550700 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" event={"ID":"64242610-9b91-49bc-9400-12298973aad0","Type":"ContainerStarted","Data":"763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152"} Jan 21 14:37:19 crc kubenswrapper[4902]: I0121 14:37:19.552748 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:19 crc kubenswrapper[4902]: I0121 14:37:19.555183 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d86c450a-56ce-4439-8396-f6d87fee149c","Type":"ContainerDied","Data":"3eed38a6c389fd9ea1b4b51ae79af02a7862cc31d751100e19c8ae0ae07b17e7"} Jan 21 14:37:19 crc kubenswrapper[4902]: I0121 14:37:19.555212 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eed38a6c389fd9ea1b4b51ae79af02a7862cc31d751100e19c8ae0ae07b17e7" Jan 21 14:37:19 crc kubenswrapper[4902]: I0121 14:37:19.555276 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:19 crc kubenswrapper[4902]: I0121 14:37:19.561202 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:19 crc kubenswrapper[4902]: I0121 14:37:19.575739 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" podStartSLOduration=13.57571361 podStartE2EDuration="13.57571361s" podCreationTimestamp="2026-01-21 14:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:19.572886114 +0000 UTC m=+201.649719143" watchObservedRunningTime="2026-01-21 14:37:19.57571361 +0000 UTC m=+201.652546649" Jan 21 14:37:24 crc kubenswrapper[4902]: I0121 14:37:24.588971 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chl56" event={"ID":"64be302e-c39a-4e45-8b5d-07b8819a6eb0","Type":"ContainerStarted","Data":"b0e5bd21eb45121c58536e203537cf733407f5259b12126eae2bd654f50021a5"} Jan 21 14:37:25 crc kubenswrapper[4902]: I0121 14:37:25.597520 4902 generic.go:334] "Generic (PLEG): container finished" podID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerID="b0e5bd21eb45121c58536e203537cf733407f5259b12126eae2bd654f50021a5" exitCode=0 Jan 21 14:37:25 crc kubenswrapper[4902]: I0121 14:37:25.597612 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chl56" event={"ID":"64be302e-c39a-4e45-8b5d-07b8819a6eb0","Type":"ContainerDied","Data":"b0e5bd21eb45121c58536e203537cf733407f5259b12126eae2bd654f50021a5"} Jan 21 14:37:27 crc kubenswrapper[4902]: I0121 14:37:27.608481 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgf94" event={"ID":"cc91d441-7f4a-45f8-8f71-1f04e4ade80c","Type":"ContainerStarted","Data":"d24c33bc95b69d74e25ab4cc6e01c313a3384aa99b80fde04e7056ebb32a4780"} Jan 21 14:37:27 crc kubenswrapper[4902]: I0121 14:37:27.612383 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chl56" event={"ID":"64be302e-c39a-4e45-8b5d-07b8819a6eb0","Type":"ContainerStarted","Data":"f05bf1ddcb70474108853c8a55dc880b6e2d426b5f8cf04726fd5568c5d20f31"} Jan 21 14:37:27 crc kubenswrapper[4902]: I0121 14:37:27.656905 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-chl56" podStartSLOduration=2.543483582 podStartE2EDuration="58.656889857s" podCreationTimestamp="2026-01-21 14:36:29 +0000 UTC" firstStartedPulling="2026-01-21 14:36:30.976454967 +0000 UTC m=+153.053288006" lastFinishedPulling="2026-01-21 14:37:27.089861252 +0000 UTC m=+209.166694281" observedRunningTime="2026-01-21 14:37:27.654543116 +0000 UTC m=+209.731376145" watchObservedRunningTime="2026-01-21 14:37:27.656889857 +0000 UTC m=+209.733722886" Jan 21 14:37:28 crc kubenswrapper[4902]: I0121 14:37:28.617371 4902 generic.go:334] "Generic (PLEG): container finished" podID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerID="32894c3de1e75b38e7274c12ebe204f101b1ece066ced856e2483329caa616b0" exitCode=0 Jan 21 14:37:28 crc kubenswrapper[4902]: I0121 14:37:28.617452 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl5zx" 
event={"ID":"4504c44c-17da-4a32-ac81-7efc9ec6b1cb","Type":"ContainerDied","Data":"32894c3de1e75b38e7274c12ebe204f101b1ece066ced856e2483329caa616b0"} Jan 21 14:37:28 crc kubenswrapper[4902]: I0121 14:37:28.619817 4902 generic.go:334] "Generic (PLEG): container finished" podID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerID="d24c33bc95b69d74e25ab4cc6e01c313a3384aa99b80fde04e7056ebb32a4780" exitCode=0 Jan 21 14:37:28 crc kubenswrapper[4902]: I0121 14:37:28.619849 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgf94" event={"ID":"cc91d441-7f4a-45f8-8f71-1f04e4ade80c","Type":"ContainerDied","Data":"d24c33bc95b69d74e25ab4cc6e01c313a3384aa99b80fde04e7056ebb32a4780"} Jan 21 14:37:29 crc kubenswrapper[4902]: I0121 14:37:29.628414 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgf94" event={"ID":"cc91d441-7f4a-45f8-8f71-1f04e4ade80c","Type":"ContainerStarted","Data":"dafc51f1d7f9ba142fe8d6c07ed9585d44e582c8f889e035fd698495242522fb"} Jan 21 14:37:29 crc kubenswrapper[4902]: I0121 14:37:29.631453 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl5zx" event={"ID":"4504c44c-17da-4a32-ac81-7efc9ec6b1cb","Type":"ContainerStarted","Data":"02dbc8575387c68b7070c5eedfc10c67111d8864380f2d313d7bb3003fd6c4e6"} Jan 21 14:37:29 crc kubenswrapper[4902]: I0121 14:37:29.652015 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xgf94" podStartSLOduration=3.4561969550000002 podStartE2EDuration="1m3.651998245s" podCreationTimestamp="2026-01-21 14:36:26 +0000 UTC" firstStartedPulling="2026-01-21 14:36:28.87365728 +0000 UTC m=+150.950490309" lastFinishedPulling="2026-01-21 14:37:29.06945857 +0000 UTC m=+211.146291599" observedRunningTime="2026-01-21 14:37:29.649626424 +0000 UTC m=+211.726459463" watchObservedRunningTime="2026-01-21 14:37:29.651998245 +0000 UTC m=+211.728831274" Jan 21 14:37:29 crc kubenswrapper[4902]: I0121 14:37:29.675844 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dl5zx" podStartSLOduration=2.606489773 podStartE2EDuration="1m1.675827941s" podCreationTimestamp="2026-01-21 14:36:28 +0000 UTC" firstStartedPulling="2026-01-21 14:36:29.929754593 +0000 UTC m=+152.006587622" lastFinishedPulling="2026-01-21 14:37:28.999092761 +0000 UTC m=+211.075925790" observedRunningTime="2026-01-21 14:37:29.67316543 +0000 UTC m=+211.749998459" watchObservedRunningTime="2026-01-21 14:37:29.675827941 +0000 UTC m=+211.752660970" Jan 21 14:37:29 crc kubenswrapper[4902]: I0121 14:37:29.986697 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:37:29 crc kubenswrapper[4902]: I0121 14:37:29.986753 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:37:30 crc kubenswrapper[4902]: I0121 14:37:30.637529 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqq5l" event={"ID":"bc7ccff8-2db2-4663-9565-42f2357e4bda","Type":"ContainerStarted","Data":"b890da03655fa68fe9c4fc2736b49d628cd2dfdd1eb8c53e0f6a92826e80b3e8"} Jan 21 14:37:30 crc kubenswrapper[4902]: I0121 14:37:30.642796 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7hf5" 
event={"ID":"19482ae1-f291-4111-83b5-56fa37063508","Type":"ContainerStarted","Data":"f826ca325a2cf505b8c815980387763af7f3ba9503a5207a8722972de88aec84"} Jan 21 14:37:31 crc kubenswrapper[4902]: I0121 14:37:31.276247 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chl56" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="registry-server" probeResult="failure" output=< Jan 21 14:37:31 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 14:37:31 crc kubenswrapper[4902]: > Jan 21 14:37:31 crc kubenswrapper[4902]: I0121 14:37:31.651660 4902 generic.go:334] "Generic (PLEG): container finished" podID="008311b3-7361-4466-aacd-01bbaa16f6df" containerID="226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc" exitCode=0 Jan 21 14:37:31 crc kubenswrapper[4902]: I0121 14:37:31.651745 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77b9d" event={"ID":"008311b3-7361-4466-aacd-01bbaa16f6df","Type":"ContainerDied","Data":"226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc"} Jan 21 14:37:31 crc kubenswrapper[4902]: I0121 14:37:31.656338 4902 generic.go:334] "Generic (PLEG): container finished" podID="19482ae1-f291-4111-83b5-56fa37063508" containerID="f826ca325a2cf505b8c815980387763af7f3ba9503a5207a8722972de88aec84" exitCode=0 Jan 21 14:37:31 crc kubenswrapper[4902]: I0121 14:37:31.656391 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7hf5" event={"ID":"19482ae1-f291-4111-83b5-56fa37063508","Type":"ContainerDied","Data":"f826ca325a2cf505b8c815980387763af7f3ba9503a5207a8722972de88aec84"} Jan 21 14:37:31 crc kubenswrapper[4902]: I0121 14:37:31.661654 4902 generic.go:334] "Generic (PLEG): container finished" podID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerID="b890da03655fa68fe9c4fc2736b49d628cd2dfdd1eb8c53e0f6a92826e80b3e8" exitCode=0 Jan 21 14:37:31 crc kubenswrapper[4902]: I0121 14:37:31.661785 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqq5l" event={"ID":"bc7ccff8-2db2-4663-9565-42f2357e4bda","Type":"ContainerDied","Data":"b890da03655fa68fe9c4fc2736b49d628cd2dfdd1eb8c53e0f6a92826e80b3e8"} Jan 21 14:37:32 crc kubenswrapper[4902]: I0121 14:37:32.667522 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7hf5" event={"ID":"19482ae1-f291-4111-83b5-56fa37063508","Type":"ContainerStarted","Data":"62550e0c563096e2eaf93d50c31f0aa211becf7530f77d8467b535ecccc978a4"} Jan 21 14:37:32 crc kubenswrapper[4902]: I0121 14:37:32.669623 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98c57" event={"ID":"ea3b3336-0258-4b66-bd33-dd4e01543236","Type":"ContainerStarted","Data":"8cab10508e95ad152396df1527b6a482788a8695c58e273be61d4a6811398d99"} Jan 21 14:37:32 crc kubenswrapper[4902]: I0121 14:37:32.671801 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqq5l" event={"ID":"bc7ccff8-2db2-4663-9565-42f2357e4bda","Type":"ContainerStarted","Data":"d0628adcdeba28f3f91c06f4ebfb0013ccce0fcc7b38a42868bbe4850a301bc0"} Jan 21 14:37:32 crc kubenswrapper[4902]: I0121 14:37:32.686576 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d7hf5" podStartSLOduration=2.579450808 podStartE2EDuration="1m4.686550553s" 
podCreationTimestamp="2026-01-21 14:36:28 +0000 UTC" firstStartedPulling="2026-01-21 14:36:29.927971892 +0000 UTC m=+152.004804921" lastFinishedPulling="2026-01-21 14:37:32.035071637 +0000 UTC m=+214.111904666" observedRunningTime="2026-01-21 14:37:32.68149294 +0000 UTC m=+214.758325969" watchObservedRunningTime="2026-01-21 14:37:32.686550553 +0000 UTC m=+214.763383582" Jan 21 14:37:32 crc kubenswrapper[4902]: I0121 14:37:32.695871 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77b9d" event={"ID":"008311b3-7361-4466-aacd-01bbaa16f6df","Type":"ContainerStarted","Data":"318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed"} Jan 21 14:37:32 crc kubenswrapper[4902]: I0121 14:37:32.721298 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fqq5l" podStartSLOduration=3.5085660279999997 podStartE2EDuration="1m6.721279862s" podCreationTimestamp="2026-01-21 14:36:26 +0000 UTC" firstStartedPulling="2026-01-21 14:36:28.874085705 +0000 UTC m=+150.950918724" lastFinishedPulling="2026-01-21 14:37:32.086799499 +0000 UTC m=+214.163632558" observedRunningTime="2026-01-21 14:37:32.718861409 +0000 UTC m=+214.795694438" watchObservedRunningTime="2026-01-21 14:37:32.721279862 +0000 UTC m=+214.798112891" Jan 21 14:37:32 crc kubenswrapper[4902]: I0121 14:37:32.746362 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-77b9d" podStartSLOduration=3.361184334 podStartE2EDuration="1m6.74633047s" podCreationTimestamp="2026-01-21 14:36:26 +0000 UTC" firstStartedPulling="2026-01-21 14:36:28.892057202 +0000 UTC m=+150.968890231" lastFinishedPulling="2026-01-21 14:37:32.277203338 +0000 UTC m=+214.354036367" observedRunningTime="2026-01-21 14:37:32.742445957 +0000 UTC m=+214.819278996" watchObservedRunningTime="2026-01-21 14:37:32.74633047 +0000 UTC m=+214.823163499" Jan 21 14:37:33 crc kubenswrapper[4902]: I0121 14:37:33.702427 4902 generic.go:334] "Generic (PLEG): container finished" podID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerID="8cab10508e95ad152396df1527b6a482788a8695c58e273be61d4a6811398d99" exitCode=0 Jan 21 14:37:33 crc kubenswrapper[4902]: I0121 14:37:33.702497 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98c57" event={"ID":"ea3b3336-0258-4b66-bd33-dd4e01543236","Type":"ContainerDied","Data":"8cab10508e95ad152396df1527b6a482788a8695c58e273be61d4a6811398d99"} Jan 21 14:37:33 crc kubenswrapper[4902]: I0121 14:37:33.851008 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gzb8l"] Jan 21 14:37:33 crc kubenswrapper[4902]: E0121 14:37:33.851237 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d86c450a-56ce-4439-8396-f6d87fee149c" containerName="pruner" Jan 21 14:37:33 crc kubenswrapper[4902]: I0121 14:37:33.851251 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d86c450a-56ce-4439-8396-f6d87fee149c" containerName="pruner" Jan 21 14:37:33 crc kubenswrapper[4902]: I0121 14:37:33.851351 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d86c450a-56ce-4439-8396-f6d87fee149c" containerName="pruner" Jan 21 14:37:33 crc kubenswrapper[4902]: I0121 14:37:33.851709 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:33 crc kubenswrapper[4902]: I0121 14:37:33.876453 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gzb8l"] Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.022328 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf7v7\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-kube-api-access-nf7v7\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.022368 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb2b422b-c8b3-48ec-901a-e9da16f653fa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.022400 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.022449 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bb2b422b-c8b3-48ec-901a-e9da16f653fa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.022509 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb2b422b-c8b3-48ec-901a-e9da16f653fa-trusted-ca\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.022540 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-registry-tls\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.022575 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-bound-sa-token\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.022616 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/bb2b422b-c8b3-48ec-901a-e9da16f653fa-registry-certificates\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.048035 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.124029 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bb2b422b-c8b3-48ec-901a-e9da16f653fa-registry-certificates\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.124131 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf7v7\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-kube-api-access-nf7v7\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.124149 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb2b422b-c8b3-48ec-901a-e9da16f653fa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.124182 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bb2b422b-c8b3-48ec-901a-e9da16f653fa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.124232 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb2b422b-c8b3-48ec-901a-e9da16f653fa-trusted-ca\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.124254 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-registry-tls\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.124279 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-bound-sa-token\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.125438 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb2b422b-c8b3-48ec-901a-e9da16f653fa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.125688 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bb2b422b-c8b3-48ec-901a-e9da16f653fa-registry-certificates\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.125958 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb2b422b-c8b3-48ec-901a-e9da16f653fa-trusted-ca\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.131872 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-registry-tls\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.132998 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bb2b422b-c8b3-48ec-901a-e9da16f653fa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.147439 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-bound-sa-token\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.148162 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf7v7\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-kube-api-access-nf7v7\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.166441 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.635282 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gzb8l"] Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.722134 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fl2j4" event={"ID":"3c88f2d9-944f-408e-bfe3-41c8baac6175","Type":"ContainerStarted","Data":"d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2"} Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.723713 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" event={"ID":"bb2b422b-c8b3-48ec-901a-e9da16f653fa","Type":"ContainerStarted","Data":"3f908700c8cf853274ca5eeb6feb0f07e0be5242f0793e3f63013efe713ad8fe"} Jan 21 14:37:35 crc kubenswrapper[4902]: I0121 14:37:35.731242 4902 generic.go:334] "Generic (PLEG): container finished" podID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerID="d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2" exitCode=0 Jan 21 14:37:35 crc kubenswrapper[4902]: I0121 14:37:35.731328 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fl2j4" event={"ID":"3c88f2d9-944f-408e-bfe3-41c8baac6175","Type":"ContainerDied","Data":"d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2"} Jan 21 14:37:35 crc kubenswrapper[4902]: I0121 14:37:35.733275 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" event={"ID":"bb2b422b-c8b3-48ec-901a-e9da16f653fa","Type":"ContainerStarted","Data":"fab65958ab03b4830e04b8ca296f80ab4d88555ba18b09a05ef3c49b008e6ad5"} Jan 21 14:37:35 crc kubenswrapper[4902]: I0121 14:37:35.733933 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:36 crc kubenswrapper[4902]: I0121 14:37:36.401012 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:37:36 crc kubenswrapper[4902]: I0121 14:37:36.401083 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:37:36 crc kubenswrapper[4902]: I0121 14:37:36.450621 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:37:36 crc kubenswrapper[4902]: I0121 14:37:36.470154 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" podStartSLOduration=3.4701363069999998 podStartE2EDuration="3.470136307s" podCreationTimestamp="2026-01-21 14:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:35.771703763 +0000 UTC m=+217.848536782" watchObservedRunningTime="2026-01-21 14:37:36.470136307 +0000 UTC m=+218.546969336" Jan 21 14:37:36 crc kubenswrapper[4902]: I0121 14:37:36.618968 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:37:36 crc kubenswrapper[4902]: I0121 14:37:36.619013 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:37:36 crc kubenswrapper[4902]: I0121 14:37:36.659454 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:37:36 crc kubenswrapper[4902]: I0121 14:37:36.782314 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:37:37 crc kubenswrapper[4902]: I0121 14:37:37.088962 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:37:37 crc kubenswrapper[4902]: I0121 14:37:37.089143 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:37:37 crc kubenswrapper[4902]: I0121 14:37:37.133172 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:37:37 crc kubenswrapper[4902]: I0121 14:37:37.780455 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:37:38 crc kubenswrapper[4902]: I0121 14:37:38.626328 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:37:38 crc kubenswrapper[4902]: I0121 14:37:38.628320 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:37:38 crc kubenswrapper[4902]: I0121 14:37:38.672988 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:37:38 crc kubenswrapper[4902]: I0121 14:37:38.783036 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:37:38 crc kubenswrapper[4902]: I0121 14:37:38.951881 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:37:38 crc kubenswrapper[4902]: I0121 14:37:38.951942 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:37:39 crc kubenswrapper[4902]: I0121 14:37:39.006584 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:37:39 crc kubenswrapper[4902]: I0121 14:37:39.792849 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:37:40 crc kubenswrapper[4902]: I0121 14:37:40.032271 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:37:40 crc kubenswrapper[4902]: I0121 14:37:40.079233 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:37:40 crc kubenswrapper[4902]: I0121 14:37:40.230275 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-77b9d"] Jan 21 14:37:40 crc kubenswrapper[4902]: I0121 14:37:40.758313 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98c57" 
event={"ID":"ea3b3336-0258-4b66-bd33-dd4e01543236","Type":"ContainerStarted","Data":"326f8caa7a461344deaec70d776c5a6beda6a87d6c452e21917d3f11867ce5f4"} Jan 21 14:37:40 crc kubenswrapper[4902]: I0121 14:37:40.762728 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fl2j4" event={"ID":"3c88f2d9-944f-408e-bfe3-41c8baac6175","Type":"ContainerStarted","Data":"86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19"} Jan 21 14:37:40 crc kubenswrapper[4902]: I0121 14:37:40.763570 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-77b9d" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" containerName="registry-server" containerID="cri-o://318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed" gracePeriod=2 Jan 21 14:37:40 crc kubenswrapper[4902]: I0121 14:37:40.783464 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-98c57" podStartSLOduration=2.877360427 podStartE2EDuration="1m11.783441707s" podCreationTimestamp="2026-01-21 14:36:29 +0000 UTC" firstStartedPulling="2026-01-21 14:36:30.966507951 +0000 UTC m=+153.043340980" lastFinishedPulling="2026-01-21 14:37:39.872589231 +0000 UTC m=+221.949422260" observedRunningTime="2026-01-21 14:37:40.780476796 +0000 UTC m=+222.857309845" watchObservedRunningTime="2026-01-21 14:37:40.783441707 +0000 UTC m=+222.860274736" Jan 21 14:37:40 crc kubenswrapper[4902]: I0121 14:37:40.800036 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fl2j4" podStartSLOduration=3.802091478 podStartE2EDuration="1m14.800019035s" podCreationTimestamp="2026-01-21 14:36:26 +0000 UTC" firstStartedPulling="2026-01-21 14:36:28.897768285 +0000 UTC m=+150.974601314" lastFinishedPulling="2026-01-21 14:37:39.895695842 +0000 UTC m=+221.972528871" observedRunningTime="2026-01-21 14:37:40.799572839 +0000 UTC m=+222.876405868" watchObservedRunningTime="2026-01-21 14:37:40.800019035 +0000 UTC m=+222.876852064" Jan 21 14:37:40 crc kubenswrapper[4902]: E0121 14:37:40.831293 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod008311b3_7361_4466_aacd_01bbaa16f6df.slice/crio-318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed.scope\": RecentStats: unable to find data in memory cache]" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.259024 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.425593 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-utilities\") pod \"008311b3-7361-4466-aacd-01bbaa16f6df\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.425915 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sp9q\" (UniqueName: \"kubernetes.io/projected/008311b3-7361-4466-aacd-01bbaa16f6df-kube-api-access-2sp9q\") pod \"008311b3-7361-4466-aacd-01bbaa16f6df\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.425975 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-catalog-content\") pod \"008311b3-7361-4466-aacd-01bbaa16f6df\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.426670 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-utilities" (OuterVolumeSpecName: "utilities") pod "008311b3-7361-4466-aacd-01bbaa16f6df" (UID: "008311b3-7361-4466-aacd-01bbaa16f6df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.427724 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.440831 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008311b3-7361-4466-aacd-01bbaa16f6df-kube-api-access-2sp9q" (OuterVolumeSpecName: "kube-api-access-2sp9q") pod "008311b3-7361-4466-aacd-01bbaa16f6df" (UID: "008311b3-7361-4466-aacd-01bbaa16f6df"). InnerVolumeSpecName "kube-api-access-2sp9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.498926 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "008311b3-7361-4466-aacd-01bbaa16f6df" (UID: "008311b3-7361-4466-aacd-01bbaa16f6df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.528769 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sp9q\" (UniqueName: \"kubernetes.io/projected/008311b3-7361-4466-aacd-01bbaa16f6df-kube-api-access-2sp9q\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.528806 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.769752 4902 generic.go:334] "Generic (PLEG): container finished" podID="008311b3-7361-4466-aacd-01bbaa16f6df" containerID="318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed" exitCode=0 Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.769833 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77b9d" event={"ID":"008311b3-7361-4466-aacd-01bbaa16f6df","Type":"ContainerDied","Data":"318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed"} Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.769858 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.769890 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77b9d" event={"ID":"008311b3-7361-4466-aacd-01bbaa16f6df","Type":"ContainerDied","Data":"e20ca1be73e15aa077e2b594ce74f037b1aa06b9991f1e73b39c1c1eae4ecbca"} Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.769920 4902 scope.go:117] "RemoveContainer" containerID="318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.787960 4902 scope.go:117] "RemoveContainer" containerID="226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.801460 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-77b9d"] Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.805952 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-77b9d"] Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.826012 4902 scope.go:117] "RemoveContainer" containerID="063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.839947 4902 scope.go:117] "RemoveContainer" containerID="318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed" Jan 21 14:37:41 crc kubenswrapper[4902]: E0121 14:37:41.840359 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed\": container with ID starting with 318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed not found: ID does not exist" containerID="318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.840466 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed"} err="failed to get container status 
\"318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed\": rpc error: code = NotFound desc = could not find container \"318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed\": container with ID starting with 318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed not found: ID does not exist" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.840573 4902 scope.go:117] "RemoveContainer" containerID="226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc" Jan 21 14:37:41 crc kubenswrapper[4902]: E0121 14:37:41.841114 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc\": container with ID starting with 226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc not found: ID does not exist" containerID="226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.841175 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc"} err="failed to get container status \"226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc\": rpc error: code = NotFound desc = could not find container \"226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc\": container with ID starting with 226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc not found: ID does not exist" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.841223 4902 scope.go:117] "RemoveContainer" containerID="063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9" Jan 21 14:37:41 crc kubenswrapper[4902]: E0121 14:37:41.862099 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9\": container with ID starting with 063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9 not found: ID does not exist" containerID="063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.862163 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9"} err="failed to get container status \"063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9\": rpc error: code = NotFound desc = could not find container \"063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9\": container with ID starting with 063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9 not found: ID does not exist" Jan 21 14:37:42 crc kubenswrapper[4902]: I0121 14:37:42.302493 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" path="/var/lib/kubelet/pods/008311b3-7361-4466-aacd-01bbaa16f6df/volumes" Jan 21 14:37:42 crc kubenswrapper[4902]: I0121 14:37:42.628417 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dl5zx"] Jan 21 14:37:42 crc kubenswrapper[4902]: I0121 14:37:42.629053 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dl5zx" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerName="registry-server" 
containerID="cri-o://02dbc8575387c68b7070c5eedfc10c67111d8864380f2d313d7bb3003fd6c4e6" gracePeriod=2 Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.625877 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chl56"] Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.626376 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-chl56" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="registry-server" containerID="cri-o://f05bf1ddcb70474108853c8a55dc880b6e2d426b5f8cf04726fd5568c5d20f31" gracePeriod=2 Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.935285 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fl2j4"] Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.935700 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fl2j4" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerName="registry-server" containerID="cri-o://86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19" gracePeriod=30 Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.941202 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xgf94"] Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.941503 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xgf94" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerName="registry-server" containerID="cri-o://dafc51f1d7f9ba142fe8d6c07ed9585d44e582c8f889e035fd698495242522fb" gracePeriod=30 Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.961545 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fqq5l"] Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.961813 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fqq5l" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="registry-server" containerID="cri-o://d0628adcdeba28f3f91c06f4ebfb0013ccce0fcc7b38a42868bbe4850a301bc0" gracePeriod=30 Jan 21 14:37:43 crc kubenswrapper[4902]: E0121 14:37:43.965364 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0628adcdeba28f3f91c06f4ebfb0013ccce0fcc7b38a42868bbe4850a301bc0" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 14:37:43 crc kubenswrapper[4902]: E0121 14:37:43.968602 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0628adcdeba28f3f91c06f4ebfb0013ccce0fcc7b38a42868bbe4850a301bc0" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 14:37:43 crc kubenswrapper[4902]: E0121 14:37:43.970439 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0628adcdeba28f3f91c06f4ebfb0013ccce0fcc7b38a42868bbe4850a301bc0" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 14:37:43 crc kubenswrapper[4902]: E0121 14:37:43.970490 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/community-operators-fqq5l" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="registry-server" Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.981307 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xm5cd"] Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.981562 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" podUID="179de16d-c6d0-4cda-8d1f-8c2396301175" containerName="marketplace-operator" containerID="cri-o://f47ac0d984bd534f8dbc95c34421c4c7e222580c524d56fef0a86d89726b4ac0" gracePeriod=30 Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.992515 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7hf5"] Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.992887 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d7hf5" podUID="19482ae1-f291-4111-83b5-56fa37063508" containerName="registry-server" containerID="cri-o://62550e0c563096e2eaf93d50c31f0aa211becf7530f77d8467b535ecccc978a4" gracePeriod=30 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.008752 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-z4vkp"] Jan 21 14:37:44 crc kubenswrapper[4902]: E0121 14:37:44.009595 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" containerName="extract-utilities" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.009622 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" containerName="extract-utilities" Jan 21 14:37:44 crc kubenswrapper[4902]: E0121 14:37:44.009640 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" containerName="extract-content" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.009648 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" containerName="extract-content" Jan 21 14:37:44 crc kubenswrapper[4902]: E0121 14:37:44.009659 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" containerName="registry-server" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.009670 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" containerName="registry-server" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.009917 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" containerName="registry-server" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.010646 4902 util.go:30] "No sandbox for pod can be found. 
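
The failed ExecSync probes above run ["grpc_health_probe","-addr=:50051"] inside the container, which is simply a client of the standard gRPC health service; the errors appear because the container was already stopping when the readiness probe fired. A minimal Go sketch of the same check done natively with grpc-go; the address and timeout are illustrative:

    // healthcheck.go - performs the check the exec probe above delegates
    // to grpc_health_probe: call the standard gRPC health service.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        conn, err := grpc.NewClient("localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        resp, err := healthpb.NewHealthClient(conn).Check(ctx,
            &healthpb.HealthCheckRequest{}) // empty Service = overall health
        if err != nil {
            fmt.Println("probe failed:", err) // analogous to the errors above
            return
        }
        fmt.Println("status:", resp.GetStatus()) // SERVING when ready
    }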
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.023088 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-98c57"] Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.023611 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-98c57" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerName="registry-server" containerID="cri-o://326f8caa7a461344deaec70d776c5a6beda6a87d6c452e21917d3f11867ce5f4" gracePeriod=30 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.028707 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-z4vkp"] Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.185169 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: \"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.185540 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zddwg\" (UniqueName: \"kubernetes.io/projected/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-kube-api-access-zddwg\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: \"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.185567 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: \"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.219448 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n2xzb"] Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.286554 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zddwg\" (UniqueName: \"kubernetes.io/projected/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-kube-api-access-zddwg\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: \"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.286633 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: \"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.286682 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: 
\"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.289131 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: \"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.293887 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: \"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.322614 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zddwg\" (UniqueName: \"kubernetes.io/projected/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-kube-api-access-zddwg\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: \"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.387495 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.620671 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fl2j4" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.711564 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-catalog-content\") pod \"3c88f2d9-944f-408e-bfe3-41c8baac6175\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.711651 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-utilities\") pod \"3c88f2d9-944f-408e-bfe3-41c8baac6175\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.711705 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nlk7\" (UniqueName: \"kubernetes.io/projected/3c88f2d9-944f-408e-bfe3-41c8baac6175-kube-api-access-9nlk7\") pod \"3c88f2d9-944f-408e-bfe3-41c8baac6175\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.714585 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-utilities" (OuterVolumeSpecName: "utilities") pod "3c88f2d9-944f-408e-bfe3-41c8baac6175" (UID: "3c88f2d9-944f-408e-bfe3-41c8baac6175"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.719766 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c88f2d9-944f-408e-bfe3-41c8baac6175-kube-api-access-9nlk7" (OuterVolumeSpecName: "kube-api-access-9nlk7") pod "3c88f2d9-944f-408e-bfe3-41c8baac6175" (UID: "3c88f2d9-944f-408e-bfe3-41c8baac6175"). InnerVolumeSpecName "kube-api-access-9nlk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.813088 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.813429 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nlk7\" (UniqueName: \"kubernetes.io/projected/3c88f2d9-944f-408e-bfe3-41c8baac6175-kube-api-access-9nlk7\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.816622 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c88f2d9-944f-408e-bfe3-41c8baac6175" (UID: "3c88f2d9-944f-408e-bfe3-41c8baac6175"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.826278 4902 generic.go:334] "Generic (PLEG): container finished" podID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerID="02dbc8575387c68b7070c5eedfc10c67111d8864380f2d313d7bb3003fd6c4e6" exitCode=0 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.830026 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl5zx" event={"ID":"4504c44c-17da-4a32-ac81-7efc9ec6b1cb","Type":"ContainerDied","Data":"02dbc8575387c68b7070c5eedfc10c67111d8864380f2d313d7bb3003fd6c4e6"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.835057 4902 generic.go:334] "Generic (PLEG): container finished" podID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerID="dafc51f1d7f9ba142fe8d6c07ed9585d44e582c8f889e035fd698495242522fb" exitCode=0 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.835204 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgf94" event={"ID":"cc91d441-7f4a-45f8-8f71-1f04e4ade80c","Type":"ContainerDied","Data":"dafc51f1d7f9ba142fe8d6c07ed9585d44e582c8f889e035fd698495242522fb"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.836874 4902 generic.go:334] "Generic (PLEG): container finished" podID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerID="f05bf1ddcb70474108853c8a55dc880b6e2d426b5f8cf04726fd5568c5d20f31" exitCode=0 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.836980 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chl56" event={"ID":"64be302e-c39a-4e45-8b5d-07b8819a6eb0","Type":"ContainerDied","Data":"f05bf1ddcb70474108853c8a55dc880b6e2d426b5f8cf04726fd5568c5d20f31"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.838195 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-98c57_ea3b3336-0258-4b66-bd33-dd4e01543236/registry-server/0.log" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.839467 4902 generic.go:334] 
"Generic (PLEG): container finished" podID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerID="326f8caa7a461344deaec70d776c5a6beda6a87d6c452e21917d3f11867ce5f4" exitCode=1 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.839571 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98c57" event={"ID":"ea3b3336-0258-4b66-bd33-dd4e01543236","Type":"ContainerDied","Data":"326f8caa7a461344deaec70d776c5a6beda6a87d6c452e21917d3f11867ce5f4"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.841453 4902 generic.go:334] "Generic (PLEG): container finished" podID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerID="d0628adcdeba28f3f91c06f4ebfb0013ccce0fcc7b38a42868bbe4850a301bc0" exitCode=0 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.841536 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqq5l" event={"ID":"bc7ccff8-2db2-4663-9565-42f2357e4bda","Type":"ContainerDied","Data":"d0628adcdeba28f3f91c06f4ebfb0013ccce0fcc7b38a42868bbe4850a301bc0"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.843449 4902 generic.go:334] "Generic (PLEG): container finished" podID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerID="86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19" exitCode=0 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.843599 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fl2j4" event={"ID":"3c88f2d9-944f-408e-bfe3-41c8baac6175","Type":"ContainerDied","Data":"86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.843684 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fl2j4" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.843923 4902 scope.go:117] "RemoveContainer" containerID="86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.843911 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fl2j4" event={"ID":"3c88f2d9-944f-408e-bfe3-41c8baac6175","Type":"ContainerDied","Data":"3ede55dacea16111f6202914e24a4d44b7e914f57126067eef1577b038c06a0b"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.845357 4902 generic.go:334] "Generic (PLEG): container finished" podID="179de16d-c6d0-4cda-8d1f-8c2396301175" containerID="f47ac0d984bd534f8dbc95c34421c4c7e222580c524d56fef0a86d89726b4ac0" exitCode=0 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.845449 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" event={"ID":"179de16d-c6d0-4cda-8d1f-8c2396301175","Type":"ContainerDied","Data":"f47ac0d984bd534f8dbc95c34421c4c7e222580c524d56fef0a86d89726b4ac0"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.850823 4902 generic.go:334] "Generic (PLEG): container finished" podID="19482ae1-f291-4111-83b5-56fa37063508" containerID="62550e0c563096e2eaf93d50c31f0aa211becf7530f77d8467b535ecccc978a4" exitCode=0 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.850882 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7hf5" event={"ID":"19482ae1-f291-4111-83b5-56fa37063508","Type":"ContainerDied","Data":"62550e0c563096e2eaf93d50c31f0aa211becf7530f77d8467b535ecccc978a4"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 
Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.884310 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-98c57"
Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.895006 4902 scope.go:117] "RemoveContainer" containerID="d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2"
Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.897611 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fl2j4"]
Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.898697 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgf94"
Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.901864 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fl2j4"]
Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.914438 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.947677 4902 scope.go:117] "RemoveContainer" containerID="d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6"
Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.989837 4902 scope.go:117] "RemoveContainer" containerID="86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19"
Jan 21 14:37:44 crc kubenswrapper[4902]: E0121 14:37:44.990374 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19\": container with ID starting with 86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19 not found: ID does not exist" containerID="86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19"
Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.990408 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19"} err="failed to get container status \"86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19\": rpc error: code = NotFound desc = could not find container \"86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19\": container with ID starting with 86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19 not found: ID does not exist"
Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.990433 4902 scope.go:117] "RemoveContainer" containerID="d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2"
Jan 21 14:37:44 crc kubenswrapper[4902]: E0121 14:37:44.990845 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2\": container with ID starting with d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2 not found: ID does not exist" containerID="d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2"
Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.990859 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2"} err="failed to get container status \"d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2\": rpc error: code = NotFound desc = could not find container \"d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2\": container with ID starting with d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2 not found: ID does not exist"
Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.990870 4902 scope.go:117] "RemoveContainer" containerID="d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6"
Jan 21 14:37:44 crc kubenswrapper[4902]: E0121 14:37:44.991799 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6\": container with ID starting with d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6 not found: ID does not exist" containerID="d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6"
Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.991816 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6"} err="failed to get container status \"d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6\": rpc error: code = NotFound desc = could not find container \"d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6\": container with ID starting with d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6 not found: ID does not exist"
Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.015274 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-utilities\") pod \"ea3b3336-0258-4b66-bd33-dd4e01543236\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") "
Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.015324 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2f8t\" (UniqueName: \"kubernetes.io/projected/ea3b3336-0258-4b66-bd33-dd4e01543236-kube-api-access-t2f8t\") pod \"ea3b3336-0258-4b66-bd33-dd4e01543236\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") "
Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.015347 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kswz\" (UniqueName: \"kubernetes.io/projected/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-kube-api-access-8kswz\") pod \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") "
Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.015387 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-catalog-content\") pod \"ea3b3336-0258-4b66-bd33-dd4e01543236\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") "
Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.015460 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-catalog-content\") pod \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") "
Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.015488 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-utilities\") pod \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") "
Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.016265 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-utilities" (OuterVolumeSpecName: "utilities") pod "ea3b3336-0258-4b66-bd33-dd4e01543236" (UID: "ea3b3336-0258-4b66-bd33-dd4e01543236"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.021688 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3b3336-0258-4b66-bd33-dd4e01543236-kube-api-access-t2f8t" (OuterVolumeSpecName: "kube-api-access-t2f8t") pod "ea3b3336-0258-4b66-bd33-dd4e01543236" (UID: "ea3b3336-0258-4b66-bd33-dd4e01543236"). InnerVolumeSpecName "kube-api-access-t2f8t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.021745 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-kube-api-access-8kswz" (OuterVolumeSpecName: "kube-api-access-8kswz") pod "cc91d441-7f4a-45f8-8f71-1f04e4ade80c" (UID: "cc91d441-7f4a-45f8-8f71-1f04e4ade80c"). InnerVolumeSpecName "kube-api-access-8kswz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.022085 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-utilities" (OuterVolumeSpecName: "utilities") pod "cc91d441-7f4a-45f8-8f71-1f04e4ade80c" (UID: "cc91d441-7f4a-45f8-8f71-1f04e4ade80c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.029650 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.029688 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2f8t\" (UniqueName: \"kubernetes.io/projected/ea3b3336-0258-4b66-bd33-dd4e01543236-kube-api-access-t2f8t\") on node \"crc\" DevicePath \"\""
Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.029702 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kswz\" (UniqueName: \"kubernetes.io/projected/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-kube-api-access-8kswz\") on node \"crc\" DevicePath \"\""
Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.029717 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.068798 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc91d441-7f4a-45f8-8f71-1f04e4ade80c" (UID: "cc91d441-7f4a-45f8-8f71-1f04e4ade80c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.111882 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.130937 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.152445 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.158242 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.198995 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea3b3336-0258-4b66-bd33-dd4e01543236" (UID: "ea3b3336-0258-4b66-bd33-dd4e01543236"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.232600 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-trusted-ca\") pod \"179de16d-c6d0-4cda-8d1f-8c2396301175\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.232660 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-operator-metrics\") pod \"179de16d-c6d0-4cda-8d1f-8c2396301175\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.232730 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trmth\" (UniqueName: \"kubernetes.io/projected/179de16d-c6d0-4cda-8d1f-8c2396301175-kube-api-access-trmth\") pod \"179de16d-c6d0-4cda-8d1f-8c2396301175\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.233115 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.233162 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "179de16d-c6d0-4cda-8d1f-8c2396301175" (UID: "179de16d-c6d0-4cda-8d1f-8c2396301175"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.236162 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "179de16d-c6d0-4cda-8d1f-8c2396301175" (UID: "179de16d-c6d0-4cda-8d1f-8c2396301175"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.236285 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/179de16d-c6d0-4cda-8d1f-8c2396301175-kube-api-access-trmth" (OuterVolumeSpecName: "kube-api-access-trmth") pod "179de16d-c6d0-4cda-8d1f-8c2396301175" (UID: "179de16d-c6d0-4cda-8d1f-8c2396301175"). InnerVolumeSpecName "kube-api-access-trmth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.284973 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.287838 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.334308 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-utilities\") pod \"bc7ccff8-2db2-4663-9565-42f2357e4bda\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.334385 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-utilities\") pod \"19482ae1-f291-4111-83b5-56fa37063508\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.334453 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-catalog-content\") pod \"19482ae1-f291-4111-83b5-56fa37063508\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.334531 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ntv4\" (UniqueName: \"kubernetes.io/projected/bc7ccff8-2db2-4663-9565-42f2357e4bda-kube-api-access-4ntv4\") pod \"bc7ccff8-2db2-4663-9565-42f2357e4bda\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.334558 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wc5g\" (UniqueName: \"kubernetes.io/projected/19482ae1-f291-4111-83b5-56fa37063508-kube-api-access-5wc5g\") pod \"19482ae1-f291-4111-83b5-56fa37063508\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.334590 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-catalog-content\") pod \"bc7ccff8-2db2-4663-9565-42f2357e4bda\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " Jan 21 14:37:45 crc 
kubenswrapper[4902]: I0121 14:37:45.335503 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-utilities" (OuterVolumeSpecName: "utilities") pod "bc7ccff8-2db2-4663-9565-42f2357e4bda" (UID: "bc7ccff8-2db2-4663-9565-42f2357e4bda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.339219 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19482ae1-f291-4111-83b5-56fa37063508-kube-api-access-5wc5g" (OuterVolumeSpecName: "kube-api-access-5wc5g") pod "19482ae1-f291-4111-83b5-56fa37063508" (UID: "19482ae1-f291-4111-83b5-56fa37063508"). InnerVolumeSpecName "kube-api-access-5wc5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.339474 4902 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.339517 4902 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.339533 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trmth\" (UniqueName: \"kubernetes.io/projected/179de16d-c6d0-4cda-8d1f-8c2396301175-kube-api-access-trmth\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.340095 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-utilities" (OuterVolumeSpecName: "utilities") pod "19482ae1-f291-4111-83b5-56fa37063508" (UID: "19482ae1-f291-4111-83b5-56fa37063508"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.340594 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc7ccff8-2db2-4663-9565-42f2357e4bda-kube-api-access-4ntv4" (OuterVolumeSpecName: "kube-api-access-4ntv4") pod "bc7ccff8-2db2-4663-9565-42f2357e4bda" (UID: "bc7ccff8-2db2-4663-9565-42f2357e4bda"). InnerVolumeSpecName "kube-api-access-4ntv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.353361 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-z4vkp"] Jan 21 14:37:45 crc kubenswrapper[4902]: W0121 14:37:45.360248 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod021a0823_715d_4b67_b5b2_b52ec6d6c7e8.slice/crio-c3f8fd209d74ae56cf38d9f7ce9254888506eeb74c63efd98ea97d301dddaac3 WatchSource:0}: Error finding container c3f8fd209d74ae56cf38d9f7ce9254888506eeb74c63efd98ea97d301dddaac3: Status 404 returned error can't find the container with id c3f8fd209d74ae56cf38d9f7ce9254888506eeb74c63efd98ea97d301dddaac3 Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.360825 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19482ae1-f291-4111-83b5-56fa37063508" (UID: "19482ae1-f291-4111-83b5-56fa37063508"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.391794 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc7ccff8-2db2-4663-9565-42f2357e4bda" (UID: "bc7ccff8-2db2-4663-9565-42f2357e4bda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.440883 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-utilities\") pod \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441023 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-catalog-content\") pod \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441061 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-catalog-content\") pod \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441081 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-utilities\") pod \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441118 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhvxb\" (UniqueName: \"kubernetes.io/projected/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-kube-api-access-xhvxb\") pod \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441142 4902 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jq29v\" (UniqueName: \"kubernetes.io/projected/64be302e-c39a-4e45-8b5d-07b8819a6eb0-kube-api-access-jq29v\") pod \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441403 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ntv4\" (UniqueName: \"kubernetes.io/projected/bc7ccff8-2db2-4663-9565-42f2357e4bda-kube-api-access-4ntv4\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441416 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wc5g\" (UniqueName: \"kubernetes.io/projected/19482ae1-f291-4111-83b5-56fa37063508-kube-api-access-5wc5g\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441425 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441435 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441443 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441452 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441799 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-utilities" (OuterVolumeSpecName: "utilities") pod "64be302e-c39a-4e45-8b5d-07b8819a6eb0" (UID: "64be302e-c39a-4e45-8b5d-07b8819a6eb0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441891 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-utilities" (OuterVolumeSpecName: "utilities") pod "4504c44c-17da-4a32-ac81-7efc9ec6b1cb" (UID: "4504c44c-17da-4a32-ac81-7efc9ec6b1cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.443657 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-kube-api-access-xhvxb" (OuterVolumeSpecName: "kube-api-access-xhvxb") pod "4504c44c-17da-4a32-ac81-7efc9ec6b1cb" (UID: "4504c44c-17da-4a32-ac81-7efc9ec6b1cb"). InnerVolumeSpecName "kube-api-access-xhvxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.443956 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64be302e-c39a-4e45-8b5d-07b8819a6eb0-kube-api-access-jq29v" (OuterVolumeSpecName: "kube-api-access-jq29v") pod "64be302e-c39a-4e45-8b5d-07b8819a6eb0" (UID: "64be302e-c39a-4e45-8b5d-07b8819a6eb0"). InnerVolumeSpecName "kube-api-access-jq29v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.468168 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4504c44c-17da-4a32-ac81-7efc9ec6b1cb" (UID: "4504c44c-17da-4a32-ac81-7efc9ec6b1cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.544331 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.544376 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.544389 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhvxb\" (UniqueName: \"kubernetes.io/projected/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-kube-api-access-xhvxb\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.544406 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq29v\" (UniqueName: \"kubernetes.io/projected/64be302e-c39a-4e45-8b5d-07b8819a6eb0-kube-api-access-jq29v\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.544418 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.587599 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64be302e-c39a-4e45-8b5d-07b8819a6eb0" (UID: "64be302e-c39a-4e45-8b5d-07b8819a6eb0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.645400 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.856796 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" event={"ID":"021a0823-715d-4b67-b5b2-b52ec6d6c7e8","Type":"ContainerStarted","Data":"c3f8fd209d74ae56cf38d9f7ce9254888506eeb74c63efd98ea97d301dddaac3"} Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.858566 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-98c57_ea3b3336-0258-4b66-bd33-dd4e01543236/registry-server/0.log" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.859859 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98c57" event={"ID":"ea3b3336-0258-4b66-bd33-dd4e01543236","Type":"ContainerDied","Data":"5d200290d772299c202f1a65fa0061ebdcb1ccceea36fa735b536ebf39ba3497"} Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.859924 4902 scope.go:117] "RemoveContainer" containerID="326f8caa7a461344deaec70d776c5a6beda6a87d6c452e21917d3f11867ce5f4" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.859877 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.863227 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.863221 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqq5l" event={"ID":"bc7ccff8-2db2-4663-9565-42f2357e4bda","Type":"ContainerDied","Data":"f45d37f7ac621a924bdf6d205f6dcfb689dbb7f1904649cc3bbc2a2dac0231b6"} Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.866165 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.866640 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" event={"ID":"179de16d-c6d0-4cda-8d1f-8c2396301175","Type":"ContainerDied","Data":"55ed63decb6129b185123334a130753c5c33884bc167ffd4431cd04957e60efe"} Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.871662 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7hf5" event={"ID":"19482ae1-f291-4111-83b5-56fa37063508","Type":"ContainerDied","Data":"024d9ea6e07bff7f0ecb8463467da83d20693d50a025a771bbc45b531070e2fd"} Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.871677 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.875388 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl5zx" event={"ID":"4504c44c-17da-4a32-ac81-7efc9ec6b1cb","Type":"ContainerDied","Data":"6d22776fe71b564cc70ae18c09c444bfb7b9c6605b6f0f8a041e615143a16c69"} Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.875522 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.880136 4902 scope.go:117] "RemoveContainer" containerID="8cab10508e95ad152396df1527b6a482788a8695c58e273be61d4a6811398d99" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.880287 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chl56" event={"ID":"64be302e-c39a-4e45-8b5d-07b8819a6eb0","Type":"ContainerDied","Data":"c2b8854fe921d56cd0a1e4ec23fb7eafebd1972826e58b8204b172f529d4bbf4"} Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.880391 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.884753 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgf94" event={"ID":"cc91d441-7f4a-45f8-8f71-1f04e4ade80c","Type":"ContainerDied","Data":"053e278127621d0aa574a001b3d7f98dd3d2a28ff0f85cb3abcc55c7682fa466"} Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.884868 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.901763 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-98c57"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.906172 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-98c57"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.917035 4902 scope.go:117] "RemoveContainer" containerID="0863c2ef512883dfa5c8cb15d84b8d3e8007faf5a420481b07e81570d0bbc513" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.921837 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fqq5l"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.925096 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fqq5l"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.948148 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xm5cd"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.950975 4902 scope.go:117] "RemoveContainer" containerID="d0628adcdeba28f3f91c06f4ebfb0013ccce0fcc7b38a42868bbe4850a301bc0" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.954554 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xm5cd"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.962901 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7hf5"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.967303 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-d7hf5"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.972945 4902 scope.go:117] "RemoveContainer" containerID="b890da03655fa68fe9c4fc2736b49d628cd2dfdd1eb8c53e0f6a92826e80b3e8" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.979966 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dl5zx"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.985264 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dl5zx"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.991521 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chl56"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.996310 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-chl56"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.998972 4902 scope.go:117] "RemoveContainer" containerID="a1204ec2b5e76cfd0fb6167da34f831607a537ca3ed511cbf74c9c91b780c2f9" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.013174 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xgf94"] Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.016315 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xgf94"] Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.019592 4902 scope.go:117] "RemoveContainer" containerID="f47ac0d984bd534f8dbc95c34421c4c7e222580c524d56fef0a86d89726b4ac0" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.044310 4902 scope.go:117] "RemoveContainer" containerID="62550e0c563096e2eaf93d50c31f0aa211becf7530f77d8467b535ecccc978a4" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.057219 4902 scope.go:117] "RemoveContainer" containerID="f826ca325a2cf505b8c815980387763af7f3ba9503a5207a8722972de88aec84" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.070251 4902 scope.go:117] "RemoveContainer" containerID="9b604d27ef105b652ea19c99e2ae291eacdb1348bd4b5e106e90424e329a7180" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.086354 4902 scope.go:117] "RemoveContainer" containerID="02dbc8575387c68b7070c5eedfc10c67111d8864380f2d313d7bb3003fd6c4e6" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.098994 4902 scope.go:117] "RemoveContainer" containerID="32894c3de1e75b38e7274c12ebe204f101b1ece066ced856e2483329caa616b0" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.110378 4902 scope.go:117] "RemoveContainer" containerID="3c2863c18937166425d91344f3ec1614a7f70129ffe061c9c5ee80eb31756b3f" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.122275 4902 scope.go:117] "RemoveContainer" containerID="f05bf1ddcb70474108853c8a55dc880b6e2d426b5f8cf04726fd5568c5d20f31" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.302642 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="179de16d-c6d0-4cda-8d1f-8c2396301175" path="/var/lib/kubelet/pods/179de16d-c6d0-4cda-8d1f-8c2396301175/volumes" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.303358 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19482ae1-f291-4111-83b5-56fa37063508" path="/var/lib/kubelet/pods/19482ae1-f291-4111-83b5-56fa37063508/volumes" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.304327 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" 
path="/var/lib/kubelet/pods/3c88f2d9-944f-408e-bfe3-41c8baac6175/volumes" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.305912 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" path="/var/lib/kubelet/pods/4504c44c-17da-4a32-ac81-7efc9ec6b1cb/volumes" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.306623 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" path="/var/lib/kubelet/pods/64be302e-c39a-4e45-8b5d-07b8819a6eb0/volumes" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.307734 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" path="/var/lib/kubelet/pods/bc7ccff8-2db2-4663-9565-42f2357e4bda/volumes" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.308424 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" path="/var/lib/kubelet/pods/cc91d441-7f4a-45f8-8f71-1f04e4ade80c/volumes" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.309065 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" path="/var/lib/kubelet/pods/ea3b3336-0258-4b66-bd33-dd4e01543236/volumes" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.731868 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-758f874869-2jb7w"] Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.732240 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" podUID="7db6f852-8480-412f-a9bf-9afd18c41d83" containerName="controller-manager" containerID="cri-o://7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e" gracePeriod=30 Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.829056 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f"] Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.829297 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" podUID="64242610-9b91-49bc-9400-12298973aad0" containerName="route-controller-manager" containerID="cri-o://763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152" gracePeriod=30 Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.038451 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ppndl"] Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.039536 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19482ae1-f291-4111-83b5-56fa37063508" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.039613 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="19482ae1-f291-4111-83b5-56fa37063508" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.039685 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.039746 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.039811 4902 
Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.039889 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerName="extract-content"
Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.040071 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="registry-server"
Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.040236 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="registry-server"
Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.040372 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="extract-content"
Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.040436 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="extract-content"
Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.040544 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179de16d-c6d0-4cda-8d1f-8c2396301175" containerName="marketplace-operator"
Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.040610 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="179de16d-c6d0-4cda-8d1f-8c2396301175" containerName="marketplace-operator"
Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.040675 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="extract-utilities"
Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.040848 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="extract-utilities"
Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.040910 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerName="registry-server"
Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.040980 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerName="registry-server"
Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.041080 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerName="registry-server"
Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.041150 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerName="registry-server"
Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.041212 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="extract-utilities"
Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.041265 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="extract-utilities"
Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.041372 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="registry-server"
Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.041426 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="registry-server"
Jan 21 14:37:47 crc
kubenswrapper[4902]: E0121 14:37:47.041479 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.041539 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.041600 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.041661 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.041716 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.041774 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.041835 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.041888 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.041949 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.042002 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.042112 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.042191 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.042275 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19482ae1-f291-4111-83b5-56fa37063508" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.042335 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="19482ae1-f291-4111-83b5-56fa37063508" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.042423 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.042508 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.042576 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19482ae1-f291-4111-83b5-56fa37063508" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.042671 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="19482ae1-f291-4111-83b5-56fa37063508" 
containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.042747 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.042810 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.042899 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.042958 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.043166 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="19482ae1-f291-4111-83b5-56fa37063508" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.043254 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.043338 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.043395 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.043457 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.043525 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="179de16d-c6d0-4cda-8d1f-8c2396301175" containerName="marketplace-operator" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.043618 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.043678 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.044979 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.046966 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppndl"] Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.048719 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.067782 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwn2l\" (UniqueName: \"kubernetes.io/projected/663aee99-c55e-45ba-b5ff-a67def0f524e-kube-api-access-zwn2l\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.067862 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/663aee99-c55e-45ba-b5ff-a67def0f524e-catalog-content\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.067905 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/663aee99-c55e-45ba-b5ff-a67def0f524e-utilities\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.127880 4902 scope.go:117] "RemoveContainer" containerID="b0e5bd21eb45121c58536e203537cf733407f5259b12126eae2bd654f50021a5" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.169389 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/663aee99-c55e-45ba-b5ff-a67def0f524e-catalog-content\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.169463 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/663aee99-c55e-45ba-b5ff-a67def0f524e-utilities\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.169799 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwn2l\" (UniqueName: \"kubernetes.io/projected/663aee99-c55e-45ba-b5ff-a67def0f524e-kube-api-access-zwn2l\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.170331 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/663aee99-c55e-45ba-b5ff-a67def0f524e-utilities\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.170372 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/663aee99-c55e-45ba-b5ff-a67def0f524e-catalog-content\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.188570 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwn2l\" (UniqueName: \"kubernetes.io/projected/663aee99-c55e-45ba-b5ff-a67def0f524e-kube-api-access-zwn2l\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.279218 4902 scope.go:117] "RemoveContainer" containerID="b780515dde8ccd794f02bd3dc6005c6baf519de90ebd8e42d401146a27f9e971" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.299593 4902 scope.go:117] "RemoveContainer" containerID="dafc51f1d7f9ba142fe8d6c07ed9585d44e582c8f889e035fd698495242522fb" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.314166 4902 scope.go:117] "RemoveContainer" containerID="d24c33bc95b69d74e25ab4cc6e01c313a3384aa99b80fde04e7056ebb32a4780" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.332097 4902 scope.go:117] "RemoveContainer" containerID="4e686b959372288a5668349b284ecd38a38ea795d787fa0d477db1901cf9976c" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.369531 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.526361 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.589439 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.681433 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64242610-9b91-49bc-9400-12298973aad0-serving-cert\") pod \"64242610-9b91-49bc-9400-12298973aad0\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.681511 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xh6p\" (UniqueName: \"kubernetes.io/projected/64242610-9b91-49bc-9400-12298973aad0-kube-api-access-5xh6p\") pod \"64242610-9b91-49bc-9400-12298973aad0\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.681557 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-client-ca\") pod \"64242610-9b91-49bc-9400-12298973aad0\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.681702 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwrwj\" (UniqueName: \"kubernetes.io/projected/7db6f852-8480-412f-a9bf-9afd18c41d83-kube-api-access-wwrwj\") pod \"7db6f852-8480-412f-a9bf-9afd18c41d83\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.681782 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7db6f852-8480-412f-a9bf-9afd18c41d83-serving-cert\") pod \"7db6f852-8480-412f-a9bf-9afd18c41d83\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.681806 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-client-ca\") pod \"7db6f852-8480-412f-a9bf-9afd18c41d83\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.681831 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-config\") pod \"64242610-9b91-49bc-9400-12298973aad0\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.682993 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-client-ca" (OuterVolumeSpecName: "client-ca") pod "64242610-9b91-49bc-9400-12298973aad0" (UID: "64242610-9b91-49bc-9400-12298973aad0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.683169 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-config" (OuterVolumeSpecName: "config") pod "64242610-9b91-49bc-9400-12298973aad0" (UID: "64242610-9b91-49bc-9400-12298973aad0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.683465 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-client-ca" (OuterVolumeSpecName: "client-ca") pod "7db6f852-8480-412f-a9bf-9afd18c41d83" (UID: "7db6f852-8480-412f-a9bf-9afd18c41d83"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.684873 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7db6f852-8480-412f-a9bf-9afd18c41d83-kube-api-access-wwrwj" (OuterVolumeSpecName: "kube-api-access-wwrwj") pod "7db6f852-8480-412f-a9bf-9afd18c41d83" (UID: "7db6f852-8480-412f-a9bf-9afd18c41d83"). InnerVolumeSpecName "kube-api-access-wwrwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.684869 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64242610-9b91-49bc-9400-12298973aad0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "64242610-9b91-49bc-9400-12298973aad0" (UID: "64242610-9b91-49bc-9400-12298973aad0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.685441 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db6f852-8480-412f-a9bf-9afd18c41d83-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7db6f852-8480-412f-a9bf-9afd18c41d83" (UID: "7db6f852-8480-412f-a9bf-9afd18c41d83"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.686379 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64242610-9b91-49bc-9400-12298973aad0-kube-api-access-5xh6p" (OuterVolumeSpecName: "kube-api-access-5xh6p") pod "64242610-9b91-49bc-9400-12298973aad0" (UID: "64242610-9b91-49bc-9400-12298973aad0"). InnerVolumeSpecName "kube-api-access-5xh6p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.769362 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.769715 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.769768 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.771839 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.771911 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388" gracePeriod=600 Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.782908 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-config\") pod \"7db6f852-8480-412f-a9bf-9afd18c41d83\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.782962 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-proxy-ca-bundles\") pod \"7db6f852-8480-412f-a9bf-9afd18c41d83\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.783182 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7db6f852-8480-412f-a9bf-9afd18c41d83-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.783203 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.783320 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.783454 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64242610-9b91-49bc-9400-12298973aad0-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.783472 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xh6p\" (UniqueName: \"kubernetes.io/projected/64242610-9b91-49bc-9400-12298973aad0-kube-api-access-5xh6p\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.783483 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.783492 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwrwj\" (UniqueName: \"kubernetes.io/projected/7db6f852-8480-412f-a9bf-9afd18c41d83-kube-api-access-wwrwj\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.783553 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7db6f852-8480-412f-a9bf-9afd18c41d83" (UID: "7db6f852-8480-412f-a9bf-9afd18c41d83"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.784680 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-config" (OuterVolumeSpecName: "config") pod "7db6f852-8480-412f-a9bf-9afd18c41d83" (UID: "7db6f852-8480-412f-a9bf-9afd18c41d83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.796626 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppndl"] Jan 21 14:37:47 crc kubenswrapper[4902]: W0121 14:37:47.803600 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod663aee99_c55e_45ba_b5ff_a67def0f524e.slice/crio-449380a51ff98996b67beeb5691ab577c4dbce6123ccc219a52b4709312762a9 WatchSource:0}: Error finding container 449380a51ff98996b67beeb5691ab577c4dbce6123ccc219a52b4709312762a9: Status 404 returned error can't find the container with id 449380a51ff98996b67beeb5691ab577c4dbce6123ccc219a52b4709312762a9 Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.884231 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.884262 4902 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.901925 4902 generic.go:334] "Generic (PLEG): container finished" podID="7db6f852-8480-412f-a9bf-9afd18c41d83" containerID="7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e" exitCode=0 Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.902011 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" event={"ID":"7db6f852-8480-412f-a9bf-9afd18c41d83","Type":"ContainerDied","Data":"7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e"} Jan 21 14:37:47 crc 
kubenswrapper[4902]: I0121 14:37:47.902059 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" event={"ID":"7db6f852-8480-412f-a9bf-9afd18c41d83","Type":"ContainerDied","Data":"1430f5f7f582f37ed906dc3199394088ef56b752688bdfa0d7374651d056e2d3"} Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.902082 4902 scope.go:117] "RemoveContainer" containerID="7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.902202 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.907482 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppndl" event={"ID":"663aee99-c55e-45ba-b5ff-a67def0f524e","Type":"ContainerStarted","Data":"449380a51ff98996b67beeb5691ab577c4dbce6123ccc219a52b4709312762a9"} Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.909701 4902 generic.go:334] "Generic (PLEG): container finished" podID="64242610-9b91-49bc-9400-12298973aad0" containerID="763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152" exitCode=0 Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.909783 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" event={"ID":"64242610-9b91-49bc-9400-12298973aad0","Type":"ContainerDied","Data":"763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152"} Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.909845 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" event={"ID":"64242610-9b91-49bc-9400-12298973aad0","Type":"ContainerDied","Data":"471419efc12598c3a121f35256027b6584df5df609cdadc31ab16086370f4330"} Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.909911 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.916926 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" event={"ID":"021a0823-715d-4b67-b5b2-b52ec6d6c7e8","Type":"ContainerStarted","Data":"3ceffc7e30beda18fa51d24c5f70afc752b14a0351e8a78296b1dfa51bb8f1e8"} Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.918091 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.924349 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.925606 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388" exitCode=0 Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.925641 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388"} Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.942699 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" podStartSLOduration=4.942679047 podStartE2EDuration="4.942679047s" podCreationTimestamp="2026-01-21 14:37:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:47.938615648 +0000 UTC m=+230.015448687" watchObservedRunningTime="2026-01-21 14:37:47.942679047 +0000 UTC m=+230.019512076" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.946933 4902 scope.go:117] "RemoveContainer" containerID="7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.947685 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e\": container with ID starting with 7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e not found: ID does not exist" containerID="7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.947743 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e"} err="failed to get container status \"7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e\": rpc error: code = NotFound desc = could not find container \"7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e\": container with ID starting with 7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e not found: ID does not exist" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.947773 4902 scope.go:117] "RemoveContainer" containerID="763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.968455 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f"] Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.975136 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f"] Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.988001 4902 scope.go:117] "RemoveContainer" containerID="763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.988444 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152\": container with ID starting with 763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152 not found: ID does not exist" containerID="763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.988477 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152"} err="failed to get container status \"763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152\": rpc error: code = NotFound desc = could not find container \"763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152\": container with ID starting with 763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152 not found: ID does not exist" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.057907 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-758f874869-2jb7w"] Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.061789 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-758f874869-2jb7w"] Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.065019 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8kplb"] Jan 21 14:37:48 crc kubenswrapper[4902]: E0121 14:37:48.065349 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64242610-9b91-49bc-9400-12298973aad0" containerName="route-controller-manager" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.065371 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="64242610-9b91-49bc-9400-12298973aad0" containerName="route-controller-manager" Jan 21 14:37:48 crc kubenswrapper[4902]: E0121 14:37:48.065390 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db6f852-8480-412f-a9bf-9afd18c41d83" containerName="controller-manager" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.065396 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db6f852-8480-412f-a9bf-9afd18c41d83" containerName="controller-manager" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.065501 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db6f852-8480-412f-a9bf-9afd18c41d83" containerName="controller-manager" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.065517 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="64242610-9b91-49bc-9400-12298973aad0" containerName="route-controller-manager" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.066936 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.067093 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8kplb"] Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.079323 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.087512 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-catalog-content\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.087608 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-utilities\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.087689 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd8vn\" (UniqueName: \"kubernetes.io/projected/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-kube-api-access-zd8vn\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.190230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-catalog-content\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.190329 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-utilities\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.190383 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd8vn\" (UniqueName: \"kubernetes.io/projected/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-kube-api-access-zd8vn\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.191115 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-catalog-content\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.191251 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-utilities\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " 
pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.212710 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd8vn\" (UniqueName: \"kubernetes.io/projected/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-kube-api-access-zd8vn\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.303623 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64242610-9b91-49bc-9400-12298973aad0" path="/var/lib/kubelet/pods/64242610-9b91-49bc-9400-12298973aad0/volumes" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.304799 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7db6f852-8480-412f-a9bf-9afd18c41d83" path="/var/lib/kubelet/pods/7db6f852-8480-412f-a9bf-9afd18c41d83/volumes" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.401533 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.773969 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6466b9bcb-hkw2b"] Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.775208 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.776997 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh"] Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.777543 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.778009 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.778422 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.778562 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.778977 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.779013 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.779350 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.786208 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.786771 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.787461 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.787530 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.790852 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.790911 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.790861 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.795272 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6466b9bcb-hkw2b"] Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.798768 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-config\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.798819 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-config\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.798854 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qjlj\" (UniqueName: \"kubernetes.io/projected/cb38b0db-02f2-4797-831b-baadb29db220-kube-api-access-4qjlj\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.799194 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cb38b0db-02f2-4797-831b-baadb29db220-serving-cert\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.799241 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-client-ca\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.799281 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-proxy-ca-bundles\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.799317 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a79a8460-e7c3-4c10-b5b9-6626715eb24a-serving-cert\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.799376 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hwbb\" (UniqueName: \"kubernetes.io/projected/a79a8460-e7c3-4c10-b5b9-6626715eb24a-kube-api-access-7hwbb\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.799489 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-client-ca\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.801340 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh"] Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.858619 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8kplb"] Jan 21 14:37:48 crc kubenswrapper[4902]: W0121 14:37:48.869794 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfddf72a9_e04a_41e1_9f81_f41a8d7b8d9c.slice/crio-b4d5651e7aed1bfe84b4e1a0469e35bf65cfc9d795aac21f972695ff9172fc3c WatchSource:0}: Error finding container b4d5651e7aed1bfe84b4e1a0469e35bf65cfc9d795aac21f972695ff9172fc3c: Status 404 returned error can't find the container with id b4d5651e7aed1bfe84b4e1a0469e35bf65cfc9d795aac21f972695ff9172fc3c Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901232 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-client-ca\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901347 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-config\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901394 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-config\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901438 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qjlj\" (UniqueName: \"kubernetes.io/projected/cb38b0db-02f2-4797-831b-baadb29db220-kube-api-access-4qjlj\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901488 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb38b0db-02f2-4797-831b-baadb29db220-serving-cert\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901520 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-client-ca\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901555 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-proxy-ca-bundles\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901588 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a79a8460-e7c3-4c10-b5b9-6626715eb24a-serving-cert\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901635 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hwbb\" (UniqueName: \"kubernetes.io/projected/a79a8460-e7c3-4c10-b5b9-6626715eb24a-kube-api-access-7hwbb\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: 
\"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.903401 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-client-ca\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.904594 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-client-ca\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.905227 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-config\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.906409 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-proxy-ca-bundles\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.907717 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-config\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.909444 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb38b0db-02f2-4797-831b-baadb29db220-serving-cert\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.909483 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a79a8460-e7c3-4c10-b5b9-6626715eb24a-serving-cert\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.918356 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hwbb\" (UniqueName: \"kubernetes.io/projected/a79a8460-e7c3-4c10-b5b9-6626715eb24a-kube-api-access-7hwbb\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.922824 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-4qjlj\" (UniqueName: \"kubernetes.io/projected/cb38b0db-02f2-4797-831b-baadb29db220-kube-api-access-4qjlj\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.937554 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"55b2a83cf4462f21e140aaf547deeb73f9aa69b5d7dddabe47e579030fe921f9"} Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.939964 4902 generic.go:334] "Generic (PLEG): container finished" podID="663aee99-c55e-45ba-b5ff-a67def0f524e" containerID="f958a13bdb6e7904db7ec1bc74b10c95e2b8e2273a9b808890869ef5f622d459" exitCode=0 Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.940019 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppndl" event={"ID":"663aee99-c55e-45ba-b5ff-a67def0f524e","Type":"ContainerDied","Data":"f958a13bdb6e7904db7ec1bc74b10c95e2b8e2273a9b808890869ef5f622d459"} Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.942894 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kplb" event={"ID":"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c","Type":"ContainerStarted","Data":"b4d5651e7aed1bfe84b4e1a0469e35bf65cfc9d795aac21f972695ff9172fc3c"} Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.103566 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.116171 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.444376 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wx2t6"] Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.446610 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.449965 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.456030 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wx2t6"] Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.507983 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh"] Jan 21 14:37:49 crc kubenswrapper[4902]: W0121 14:37:49.514669 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb38b0db_02f2_4797_831b_baadb29db220.slice/crio-a9e0a2e8240b1e4870419d402cb4655289e3ed04ceb3d2a54121480c8cb83557 WatchSource:0}: Error finding container a9e0a2e8240b1e4870419d402cb4655289e3ed04ceb3d2a54121480c8cb83557: Status 404 returned error can't find the container with id a9e0a2e8240b1e4870419d402cb4655289e3ed04ceb3d2a54121480c8cb83557 Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.553893 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6466b9bcb-hkw2b"] Jan 21 14:37:49 crc kubenswrapper[4902]: W0121 14:37:49.571973 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda79a8460_e7c3_4c10_b5b9_6626715eb24a.slice/crio-35dad082b0eb1ecb464f03e9456b66ffeaa63c62d77bb1e4f7192a72abdf2c4d WatchSource:0}: Error finding container 35dad082b0eb1ecb464f03e9456b66ffeaa63c62d77bb1e4f7192a72abdf2c4d: Status 404 returned error can't find the container with id 35dad082b0eb1ecb464f03e9456b66ffeaa63c62d77bb1e4f7192a72abdf2c4d Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.610079 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-catalog-content\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.610175 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxvzq\" (UniqueName: \"kubernetes.io/projected/a1458bec-2134-4eb6-8510-ece2a6568215-kube-api-access-bxvzq\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.610241 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-utilities\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.710574 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxvzq\" (UniqueName: \"kubernetes.io/projected/a1458bec-2134-4eb6-8510-ece2a6568215-kube-api-access-bxvzq\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " 
pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.710633 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-utilities\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.710676 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-catalog-content\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.711153 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-catalog-content\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.711379 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-utilities\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.730058 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxvzq\" (UniqueName: \"kubernetes.io/projected/a1458bec-2134-4eb6-8510-ece2a6568215-kube-api-access-bxvzq\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.767762 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.972088 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" event={"ID":"cb38b0db-02f2-4797-831b-baadb29db220","Type":"ContainerStarted","Data":"7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd"} Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.972422 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" event={"ID":"cb38b0db-02f2-4797-831b-baadb29db220","Type":"ContainerStarted","Data":"a9e0a2e8240b1e4870419d402cb4655289e3ed04ceb3d2a54121480c8cb83557"} Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.973986 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.986567 4902 generic.go:334] "Generic (PLEG): container finished" podID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" containerID="b6d7d5d162dd8020dab67d2fe23d24005012a29169bdb88e5ec54c6cf61b4929" exitCode=0 Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.986678 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kplb" event={"ID":"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c","Type":"ContainerDied","Data":"b6d7d5d162dd8020dab67d2fe23d24005012a29169bdb88e5ec54c6cf61b4929"} Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.002410 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppndl" event={"ID":"663aee99-c55e-45ba-b5ff-a67def0f524e","Type":"ContainerStarted","Data":"7de311315f7941125515d7510324423471ae52ea94f170c56e695237656c2e2a"} Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.009113 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wx2t6"] Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.012174 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" podStartSLOduration=4.012150293 podStartE2EDuration="4.012150293s" podCreationTimestamp="2026-01-21 14:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:50.000802895 +0000 UTC m=+232.077635934" watchObservedRunningTime="2026-01-21 14:37:50.012150293 +0000 UTC m=+232.088983332" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.013550 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" event={"ID":"a79a8460-e7c3-4c10-b5b9-6626715eb24a","Type":"ContainerStarted","Data":"d7270dbcc770b97b85d9ccbb0214929ed8fa65fd7e0aee8a26b7893223ddebfa"} Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.013592 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" event={"ID":"a79a8460-e7c3-4c10-b5b9-6626715eb24a","Type":"ContainerStarted","Data":"35dad082b0eb1ecb464f03e9456b66ffeaa63c62d77bb1e4f7192a72abdf2c4d"} Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.013610 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.020909 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.058729 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" podStartSLOduration=4.058714207 podStartE2EDuration="4.058714207s" podCreationTimestamp="2026-01-21 14:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:50.057944931 +0000 UTC m=+232.134777960" watchObservedRunningTime="2026-01-21 14:37:50.058714207 +0000 UTC m=+232.135547236" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.398672 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.442478 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-26g5j"] Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.443749 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.497141 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.509921 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-26g5j"] Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.527924 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmh97\" (UniqueName: \"kubernetes.io/projected/9904001f-3d1f-494d-bfb6-5baa56f45c7b-kube-api-access-vmh97\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.527969 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-utilities\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.527985 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-catalog-content\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.628540 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmh97\" (UniqueName: \"kubernetes.io/projected/9904001f-3d1f-494d-bfb6-5baa56f45c7b-kube-api-access-vmh97\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.628589 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-utilities\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.628605 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-catalog-content\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.629167 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-catalog-content\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.629219 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-utilities\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.647263 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmh97\" (UniqueName: \"kubernetes.io/projected/9904001f-3d1f-494d-bfb6-5baa56f45c7b-kube-api-access-vmh97\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.817235 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:51 crc kubenswrapper[4902]: I0121 14:37:51.018224 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx2t6" event={"ID":"a1458bec-2134-4eb6-8510-ece2a6568215","Type":"ContainerDied","Data":"75dbfffe1a292d59aebf0dda1372b5bf1cb539e9684f4315cb02199044a5774e"} Jan 21 14:37:51 crc kubenswrapper[4902]: I0121 14:37:51.018030 4902 generic.go:334] "Generic (PLEG): container finished" podID="a1458bec-2134-4eb6-8510-ece2a6568215" containerID="75dbfffe1a292d59aebf0dda1372b5bf1cb539e9684f4315cb02199044a5774e" exitCode=0 Jan 21 14:37:51 crc kubenswrapper[4902]: I0121 14:37:51.018326 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx2t6" event={"ID":"a1458bec-2134-4eb6-8510-ece2a6568215","Type":"ContainerStarted","Data":"0c33f9b7fd46d05c8e52b7ed0e8c0e3ee3e633992cb415fa75bef4908ef2fa1f"} Jan 21 14:37:51 crc kubenswrapper[4902]: I0121 14:37:51.021712 4902 generic.go:334] "Generic (PLEG): container finished" podID="663aee99-c55e-45ba-b5ff-a67def0f524e" containerID="7de311315f7941125515d7510324423471ae52ea94f170c56e695237656c2e2a" exitCode=0 Jan 21 14:37:51 crc kubenswrapper[4902]: I0121 14:37:51.021841 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppndl" event={"ID":"663aee99-c55e-45ba-b5ff-a67def0f524e","Type":"ContainerDied","Data":"7de311315f7941125515d7510324423471ae52ea94f170c56e695237656c2e2a"} Jan 21 14:37:51 crc kubenswrapper[4902]: I0121 14:37:51.030572 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kplb" event={"ID":"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c","Type":"ContainerStarted","Data":"32933455214f818d252eed3ceaaa0d4a7d2f4fa096127a0a72f35ba55e453be2"} Jan 21 14:37:51 crc kubenswrapper[4902]: I0121 14:37:51.223988 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-26g5j"] Jan 21 14:37:51 crc kubenswrapper[4902]: W0121 14:37:51.233547 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9904001f_3d1f_494d_bfb6_5baa56f45c7b.slice/crio-739b3544e777bebaead10779acdf44cab51721b0171dbd10be4cd7129f38efe6 WatchSource:0}: Error finding container 739b3544e777bebaead10779acdf44cab51721b0171dbd10be4cd7129f38efe6: Status 404 returned error can't find the container with id 739b3544e777bebaead10779acdf44cab51721b0171dbd10be4cd7129f38efe6 Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.037345 4902 generic.go:334] "Generic (PLEG): container finished" podID="a1458bec-2134-4eb6-8510-ece2a6568215" containerID="43adeb973bdbf05aa4340e69a147ab41031881fc3cf5bd920322ca643738ff13" exitCode=0 Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.037448 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx2t6" event={"ID":"a1458bec-2134-4eb6-8510-ece2a6568215","Type":"ContainerDied","Data":"43adeb973bdbf05aa4340e69a147ab41031881fc3cf5bd920322ca643738ff13"} Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.041987 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppndl" event={"ID":"663aee99-c55e-45ba-b5ff-a67def0f524e","Type":"ContainerStarted","Data":"be4c066623f5d96b397cf3b197cd7394822280faa40315013d520181b2fe0bad"} Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.043886 
4902 generic.go:334] "Generic (PLEG): container finished" podID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" containerID="32933455214f818d252eed3ceaaa0d4a7d2f4fa096127a0a72f35ba55e453be2" exitCode=0 Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.043941 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kplb" event={"ID":"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c","Type":"ContainerDied","Data":"32933455214f818d252eed3ceaaa0d4a7d2f4fa096127a0a72f35ba55e453be2"} Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.045679 4902 generic.go:334] "Generic (PLEG): container finished" podID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerID="de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5" exitCode=0 Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.046452 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26g5j" event={"ID":"9904001f-3d1f-494d-bfb6-5baa56f45c7b","Type":"ContainerDied","Data":"de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5"} Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.046473 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26g5j" event={"ID":"9904001f-3d1f-494d-bfb6-5baa56f45c7b","Type":"ContainerStarted","Data":"739b3544e777bebaead10779acdf44cab51721b0171dbd10be4cd7129f38efe6"} Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.097886 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ppndl" podStartSLOduration=2.538808746 podStartE2EDuration="5.097847374s" podCreationTimestamp="2026-01-21 14:37:47 +0000 UTC" firstStartedPulling="2026-01-21 14:37:48.941441574 +0000 UTC m=+231.018274603" lastFinishedPulling="2026-01-21 14:37:51.500480202 +0000 UTC m=+233.577313231" observedRunningTime="2026-01-21 14:37:52.095475193 +0000 UTC m=+234.172308222" watchObservedRunningTime="2026-01-21 14:37:52.097847374 +0000 UTC m=+234.174680403" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.052355 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx2t6" event={"ID":"a1458bec-2134-4eb6-8510-ece2a6568215","Type":"ContainerStarted","Data":"f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b"} Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.055666 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kplb" event={"ID":"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c","Type":"ContainerStarted","Data":"d0bb85ff115f923a7208278f2b5eb58c0438b0731d1a5a61a24b3e079aff5c99"} Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.057568 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26g5j" event={"ID":"9904001f-3d1f-494d-bfb6-5baa56f45c7b","Type":"ContainerStarted","Data":"324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7"} Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.080464 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wx2t6" podStartSLOduration=2.63394158 podStartE2EDuration="4.080441216s" podCreationTimestamp="2026-01-21 14:37:49 +0000 UTC" firstStartedPulling="2026-01-21 14:37:51.021494422 +0000 UTC m=+233.098327451" lastFinishedPulling="2026-01-21 14:37:52.467994058 +0000 UTC m=+234.544827087" observedRunningTime="2026-01-21 14:37:53.077221986 +0000 UTC 
m=+235.154055015" watchObservedRunningTime="2026-01-21 14:37:53.080441216 +0000 UTC m=+235.157274245" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.113917 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8kplb" podStartSLOduration=2.592185592 podStartE2EDuration="5.113901662s" podCreationTimestamp="2026-01-21 14:37:48 +0000 UTC" firstStartedPulling="2026-01-21 14:37:49.988510804 +0000 UTC m=+232.065343833" lastFinishedPulling="2026-01-21 14:37:52.510226874 +0000 UTC m=+234.587059903" observedRunningTime="2026-01-21 14:37:53.11179803 +0000 UTC m=+235.188631059" watchObservedRunningTime="2026-01-21 14:37:53.113901662 +0000 UTC m=+235.190734691" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.891239 4902 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.893014 4902 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.893504 4902 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.893653 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405" gracePeriod=15 Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.893852 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c" gracePeriod=15 Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.893944 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9" gracePeriod=15 Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.894006 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2" gracePeriod=15 Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.893587 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.893507 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551" gracePeriod=15 Jan 21 14:37:53 crc kubenswrapper[4902]: E0121 14:37:53.894486 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897179 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 14:37:53 crc kubenswrapper[4902]: E0121 14:37:53.897221 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897232 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 14:37:53 crc kubenswrapper[4902]: E0121 14:37:53.897267 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897277 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 14:37:53 crc kubenswrapper[4902]: E0121 14:37:53.897305 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897315 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 14:37:53 crc kubenswrapper[4902]: E0121 14:37:53.897326 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897340 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 14:37:53 crc kubenswrapper[4902]: E0121 14:37:53.897352 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897364 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 14:37:53 crc kubenswrapper[4902]: E0121 14:37:53.897379 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897387 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897681 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 14:37:53 
crc kubenswrapper[4902]: I0121 14:37:53.897699 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897718 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897733 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897743 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897760 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.904918 4902 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.943977 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.066256 4902 generic.go:334] "Generic (PLEG): container finished" podID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerID="324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7" exitCode=0 Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.066342 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26g5j" event={"ID":"9904001f-3d1f-494d-bfb6-5baa56f45c7b","Type":"ContainerDied","Data":"324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7"} Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.067677 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.067991 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:54 crc kubenswrapper[4902]: E0121 14:37:54.070122 4902 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.21:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-26g5j.188cc5d56b5d789b openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-26g5j,UID:9904001f-3d1f-494d-bfb6-5baa56f45c7b,APIVersion:v1,ResourceVersion:29868,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 14:37:54.069756059 +0000 UTC m=+236.146589088,LastTimestamp:2026-01-21 14:37:54.069756059 +0000 UTC m=+236.146589088,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.070593 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.071919 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.072795 4902 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405" exitCode=0 Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.072824 4902 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c" exitCode=0 Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.072835 4902 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9" exitCode=0 Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.072846 4902 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2" exitCode=2 Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.072901 4902 scope.go:117] "RemoveContainer" containerID="35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.091895 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.091964 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.092001 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.092020 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.092068 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.092092 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.092119 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.092135 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.173939 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.174504 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.174662 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.174806 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 
14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.193723 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.193790 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.193827 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.193850 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.193880 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.193901 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.193957 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.193980 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.194108 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.194158 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.195407 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.195445 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.195475 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.195504 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.195536 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.196154 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: E0121 14:37:54.198241 4902 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.21:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" volumeName="registry-storage" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.239205 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: W0121 14:37:54.257996 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-61900715ef2ada5169683220999783176e3eed57a9af374f4d6d527712b13ddc WatchSource:0}: Error finding container 61900715ef2ada5169683220999783176e3eed57a9af374f4d6d527712b13ddc: Status 404 returned error can't find the container with id 61900715ef2ada5169683220999783176e3eed57a9af374f4d6d527712b13ddc Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.084850 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ba32cd74229e0518ced5be5a05249085a9351531d73703f33e7dd49b6eafbb78"} Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.085724 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"61900715ef2ada5169683220999783176e3eed57a9af374f4d6d527712b13ddc"} Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.092487 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.095864 4902 generic.go:334] "Generic (PLEG): container finished" podID="84af95e1-2275-49b2-987c-afa33fb32734" containerID="ce71892c9cc4a5eca454b5acdd2876bc8fdf1542a231264709d1d8546488cc23" exitCode=0 Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.095922 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84af95e1-2275-49b2-987c-afa33fb32734","Type":"ContainerDied","Data":"ce71892c9cc4a5eca454b5acdd2876bc8fdf1542a231264709d1d8546488cc23"} Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.098944 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.099338 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.099527 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.099711 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.105096 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26g5j" event={"ID":"9904001f-3d1f-494d-bfb6-5baa56f45c7b","Type":"ContainerStarted","Data":"bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6"} Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.109121 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.109685 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.110105 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.110350 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.110761 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.111190 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.111669 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.111888 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: E0121 14:37:56.823848 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:37:56Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:37:56Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:37:56Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:37:56Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: E0121 14:37:56.824945 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: E0121 14:37:56.825393 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: E0121 14:37:56.825679 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: E0121 14:37:56.825949 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: E0121 14:37:56.825974 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.870641 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.871494 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.871988 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.872326 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.872687 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.872933 4902 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.873271 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.877181 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.877745 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.878106 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.878598 4902 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.878857 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.879174 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037536 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037603 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037631 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037688 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-kubelet-dir\") pod \"84af95e1-2275-49b2-987c-afa33fb32734\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037717 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037755 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-var-lock\") pod \"84af95e1-2275-49b2-987c-afa33fb32734\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037769 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "84af95e1-2275-49b2-987c-afa33fb32734" (UID: "84af95e1-2275-49b2-987c-afa33fb32734"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037762 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037791 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037788 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84af95e1-2275-49b2-987c-afa33fb32734-kube-api-access\") pod \"84af95e1-2275-49b2-987c-afa33fb32734\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037908 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-var-lock" (OuterVolumeSpecName: "var-lock") pod "84af95e1-2275-49b2-987c-afa33fb32734" (UID: "84af95e1-2275-49b2-987c-afa33fb32734"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.038266 4902 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.038285 4902 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.038294 4902 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.038301 4902 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.038309 4902 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.044134 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84af95e1-2275-49b2-987c-afa33fb32734-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "84af95e1-2275-49b2-987c-afa33fb32734" (UID: "84af95e1-2275-49b2-987c-afa33fb32734"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.116649 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84af95e1-2275-49b2-987c-afa33fb32734","Type":"ContainerDied","Data":"1e0e3b99d3e199bf0a5109aed2aaf9e421c0eab2e9ccba48ebac5e8687fa5207"} Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.116690 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e0e3b99d3e199bf0a5109aed2aaf9e421c0eab2e9ccba48ebac5e8687fa5207" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.116995 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.120383 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.120938 4902 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551" exitCode=0 Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.122403 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.126335 4902 scope.go:117] "RemoveContainer" containerID="12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.142122 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.142320 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.142498 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.142673 4902 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.142847 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.148203 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84af95e1-2275-49b2-987c-afa33fb32734-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.152757 4902 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.153238 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.153869 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.153886 4902 scope.go:117] "RemoveContainer" containerID="3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.154166 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.154714 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.170892 4902 scope.go:117] "RemoveContainer" containerID="0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.189166 4902 scope.go:117] "RemoveContainer" containerID="d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.209758 4902 scope.go:117] "RemoveContainer" containerID="56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.226499 4902 scope.go:117] "RemoveContainer" containerID="3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.253065 4902 scope.go:117] "RemoveContainer" containerID="12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405" Jan 21 14:37:57 crc kubenswrapper[4902]: E0121 14:37:57.253488 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\": container with ID starting with 12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405 not found: ID does not exist" containerID="12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.253530 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405"} err="failed to get container status \"12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\": rpc error: code = NotFound desc = could not find container \"12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\": container with ID starting with 12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405 not found: ID does not exist" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.253563 4902 scope.go:117] "RemoveContainer" containerID="3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c" Jan 21 14:37:57 crc kubenswrapper[4902]: E0121 14:37:57.254136 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\": container with ID starting with 3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c not found: ID does not exist" containerID="3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.254201 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c"} err="failed to get container status \"3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\": rpc error: code = NotFound desc = could not find container \"3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\": container with ID starting with 3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c not found: ID does not exist" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.254229 4902 scope.go:117] "RemoveContainer" containerID="0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9" Jan 21 14:37:57 crc kubenswrapper[4902]: E0121 14:37:57.255684 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\": container with ID starting with 0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9 not found: ID does not exist" containerID="0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.255716 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9"} err="failed to get container status \"0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\": rpc error: code = NotFound desc = could not find container \"0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\": container with ID starting with 0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9 not found: ID does not exist" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.255745 4902 scope.go:117] "RemoveContainer" containerID="d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2" Jan 21 14:37:57 crc kubenswrapper[4902]: E0121 14:37:57.256232 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\": container with ID starting with d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2 not found: ID does not exist" containerID="d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.256259 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2"} err="failed to get container status \"d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\": rpc error: code = NotFound desc = could not find container \"d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\": container with ID starting with d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2 not found: ID does not exist" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.256276 4902 scope.go:117] "RemoveContainer" containerID="56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551" Jan 21 14:37:57 crc 
kubenswrapper[4902]: E0121 14:37:57.256537 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\": container with ID starting with 56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551 not found: ID does not exist" containerID="56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.256557 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551"} err="failed to get container status \"56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\": rpc error: code = NotFound desc = could not find container \"56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\": container with ID starting with 56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551 not found: ID does not exist" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.256570 4902 scope.go:117] "RemoveContainer" containerID="3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e" Jan 21 14:37:57 crc kubenswrapper[4902]: E0121 14:37:57.256817 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\": container with ID starting with 3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e not found: ID does not exist" containerID="3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.256833 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e"} err="failed to get container status \"3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\": rpc error: code = NotFound desc = could not find container \"3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\": container with ID starting with 3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e not found: ID does not exist" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.370078 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.370118 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.432587 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.433817 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.434431 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.435008 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.435664 4902 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.436861 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.437151 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.162930 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.163926 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.164361 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.164573 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.164723 4902 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection 
refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.164872 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.165010 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.300792 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.301619 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.302074 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.302301 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.302489 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.302720 4902 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.311123 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.401743 4902 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.401789 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.440648 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.441337 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.441735 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.442026 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.442308 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.442520 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.442820 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.172963 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.173997 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.174266 4902 status_manager.go:851] 
"Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.174618 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.175274 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.175658 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.176163 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: E0121 14:37:59.336856 4902 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: E0121 14:37:59.337257 4902 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: E0121 14:37:59.337548 4902 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: E0121 14:37:59.337915 4902 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: E0121 14:37:59.338226 4902 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.338361 4902 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update 
lease" Jan 21 14:37:59 crc kubenswrapper[4902]: E0121 14:37:59.338706 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="200ms" Jan 21 14:37:59 crc kubenswrapper[4902]: E0121 14:37:59.539767 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="400ms" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.768398 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.768463 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.804840 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.805269 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.805660 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.806214 4902 status_manager.go:851] "Failed to get status for pod" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" pod="openshift-marketplace/community-operators-wx2t6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wx2t6\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.806639 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.806914 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.807283 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.807639 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: E0121 14:37:59.940722 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="800ms" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.178088 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.178487 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.178776 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.178980 4902 status_manager.go:851] "Failed to get status for pod" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" pod="openshift-marketplace/community-operators-wx2t6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wx2t6\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.179189 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.179433 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.179696 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.179932 
4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: E0121 14:38:00.743269 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="1.6s" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.818730 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.818813 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.857379 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.857875 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.858195 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.858433 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.858695 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.858964 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.859222 4902 status_manager.go:851] "Failed to get status for pod" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" pod="openshift-marketplace/community-operators-wx2t6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wx2t6\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.859467 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:01 crc kubenswrapper[4902]: I0121 14:38:01.177184 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:38:01 crc kubenswrapper[4902]: I0121 14:38:01.177799 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:01 crc kubenswrapper[4902]: I0121 14:38:01.178408 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:01 crc kubenswrapper[4902]: I0121 14:38:01.178887 4902 status_manager.go:851] "Failed to get status for pod" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" pod="openshift-marketplace/community-operators-wx2t6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wx2t6\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:01 crc kubenswrapper[4902]: I0121 14:38:01.179192 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:01 crc kubenswrapper[4902]: I0121 14:38:01.179476 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:01 crc kubenswrapper[4902]: I0121 14:38:01.179769 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:01 crc kubenswrapper[4902]: I0121 14:38:01.180026 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 
14:38:02 crc kubenswrapper[4902]: E0121 14:38:02.008445 4902 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.21:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-26g5j.188cc5d56b5d789b openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-26g5j,UID:9904001f-3d1f-494d-bfb6-5baa56f45c7b,APIVersion:v1,ResourceVersion:29868,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 14:37:54.069756059 +0000 UTC m=+236.146589088,LastTimestamp:2026-01-21 14:37:54.069756059 +0000 UTC m=+236.146589088,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 14:38:02 crc kubenswrapper[4902]: E0121 14:38:02.344850 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="3.2s"
Jan 21 14:38:05 crc kubenswrapper[4902]: E0121 14:38:05.545678 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="6.4s"
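
[editor's note] The controller.go:145 records above show the node-lease retry interval doubling through 200ms, 400ms, 800ms, 1.6s, 3.2s and 6.4s, after the controller.go:115 record reported giving up on 5 renew attempts and falling back to ensuring the lease exists. A minimal sketch of that doubling backoff, assuming an arbitrary cap chosen here for illustration; the real lease controller differs in detail.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // ensureLeaseWithBackoff retries ensure() and doubles the wait after each
    // failure, reproducing the interval sequence in the records above.
    func ensureLeaseWithBackoff(ensure func() error, base, limit time.Duration) {
    	interval := base
    	for ensure() != nil {
    		fmt.Printf("Failed to ensure lease exists, will retry in %v\n", interval)
    		time.Sleep(interval)
    		if interval*2 <= limit {
    			interval *= 2 // 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s, ...
    		}
    	}
    }

    func main() {
    	attempts := 0
    	ensureLeaseWithBackoff(func() error {
    		if attempts++; attempts < 7 {
    			return errors.New("connect: connection refused")
    		}
    		return nil // apiserver reachable again
    	}, 200*time.Millisecond, 7*time.Second)
    }
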
Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.294513 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.295651 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused"
Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.296152 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused"
Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.296460 4902 status_manager.go:851] "Failed to get status for pod" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" pod="openshift-marketplace/community-operators-wx2t6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wx2t6\": dial tcp 38.129.56.21:6443: connect: connection refused"
Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.296695 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused"
Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.296894 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused"
Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.297099 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused"
Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.297292 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused"
Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.308020 4902 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c"
Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.308063 4902 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c"
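
[editor's note] The kubelet.go:1909 / mirror_client.go:130 pair above is the static-pod path: kube-apiserver-crc runs from an on-disk manifest, its API-side "mirror pod" (podUID 8af838db-...) has gone stale, and the kubelet tries to delete it before recreating it; with port 6443 refusing connections the delete fails in the next record and is retried on later syncs until the "Deleted mirror pod because it is outdated" record further down. A rough client-go sketch of such a delete, assuming an already-built clientset and a UID precondition so a newer mirror pod cannot be removed by accident; this is an illustration, not kubelet's mirror client.

    package mirror

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    )

    // deleteStaleMirrorPod removes the API copy of a static pod. The UID
    // precondition makes the delete fail harmlessly if the mirror pod was
    // already replaced by a newer one.
    func deleteStaleMirrorPod(ctx context.Context, cs kubernetes.Interface, ns, name string, uid types.UID) error {
    	var grace int64 // 0: the mirror pod has no real containers to drain
    	if err := cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{
    		GracePeriodSeconds: &grace,
    		Preconditions:      &metav1.Preconditions{UID: &uid},
    	}); err != nil {
    		// With the apiserver down this surfaces as the "connection
    		// refused" seen in the mirror_client.go:138 record below.
    		return fmt.Errorf("failed deleting a mirror pod: %w", err)
    	}
    	return nil
    }
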
Jan 21 14:38:07 crc kubenswrapper[4902]: E0121 14:38:07.308381 4902 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.308794 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.175340 4902 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="dc58673a1dc1631e428ca61fa990459af44227104c602aee2effaba0e45ffddf" exitCode=0
Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.175420 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"dc58673a1dc1631e428ca61fa990459af44227104c602aee2effaba0e45ffddf"}
Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.175639 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"64203db93c1e54e1cf79bbfe8881d127e48e32a2911c68da90eca6a89cc36ee3"}
Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.175912 4902 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c"
Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.175932 4902 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c"
Jan 21 14:38:08 crc kubenswrapper[4902]: E0121 14:38:08.176380 4902 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.176434 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused"
Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.176931 4902 status_manager.go:851] "Failed to get status for pod" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" pod="openshift-marketplace/community-operators-wx2t6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wx2t6\": dial tcp 38.129.56.21:6443: connect: connection refused"
Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.177172 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused"
Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.177347 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused"
Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.177602 4902
status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.178016 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.178454 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.178884 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.179016 4902 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2" exitCode=1 Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.179062 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2"} Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.179462 4902 scope.go:117] "RemoveContainer" containerID="9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.179989 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.180382 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.180610 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.180836 4902 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.181134 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.181376 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.181670 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.182186 4902 status_manager.go:851] "Failed to get status for pod" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" pod="openshift-marketplace/community-operators-wx2t6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wx2t6\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.303426 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.304082 4902 status_manager.go:851] "Failed to get status for pod" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" pod="openshift-marketplace/community-operators-wx2t6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wx2t6\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.304595 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.304894 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.305105 4902 status_manager.go:851] "Failed to get status for pod" 
podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.305292 4902 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.305476 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.305699 4902 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.305891 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.766471 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.190670 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dff9c1202671af0b0a361747134a8092475701e7caa07ad1784ddcd6da6be2fe"} Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.190990 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"823188dc8f43c7423093c090374ff8be58e4e79e28b7a02a3a9d30349f9c9693"} Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.191005 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9c3482f1a841ee8066b99cd19499dcd169de10e84763f4402a7f79cf751954b1"} Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.191015 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ad5166b668349af6575f85f6b6d5d5594bcc1811143eb1c97afcfd05ef5d83c6"} Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.195525 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.195587 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f38e90723c687ad61c1b4f3fc03a3c99070f6e5ce4450df78af77e8fb2cd34c3"} Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.273563 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" podUID="0c16a673-e56a-49ff-ac34-6910e02214a6" containerName="oauth-openshift" containerID="cri-o://38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350" gracePeriod=15 Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.726496 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.842775 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-dir\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843107 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-error\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843132 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-service-ca\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843176 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4l45\" (UniqueName: \"kubernetes.io/projected/0c16a673-e56a-49ff-ac34-6910e02214a6-kube-api-access-v4l45\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843196 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-provider-selection\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.842926 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843216 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-cliconfig\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843380 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-ocp-branding-template\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843471 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-serving-cert\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843527 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-policies\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843548 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-session\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843575 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-idp-0-file-data\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843604 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-router-certs\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843647 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-trusted-ca-bundle\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843676 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-login\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: 
I0121 14:38:09.843838 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843961 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.844269 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.844302 4902 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.844325 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.844706 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.845018 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.867814 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.868128 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.872437 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c16a673-e56a-49ff-ac34-6910e02214a6-kube-api-access-v4l45" (OuterVolumeSpecName: "kube-api-access-v4l45") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "kube-api-access-v4l45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.875265 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.875626 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.879801 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.880142 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.881409 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.881836 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945540 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945576 4902 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945586 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945596 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945605 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945615 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945625 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945636 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945646 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4l45\" (UniqueName: \"kubernetes.io/projected/0c16a673-e56a-49ff-ac34-6910e02214a6-kube-api-access-v4l45\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945655 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 
Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945665 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.205071 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bd3eb8aa9e3640a1b39ba0a20e8ed265c0e4eb9a3df867da8f6365840f2fb53b"}
Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.205397 4902 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c"
Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.205416 4902 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c"
Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.205672 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.207241 4902 generic.go:334] "Generic (PLEG): container finished" podID="0c16a673-e56a-49ff-ac34-6910e02214a6" containerID="38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350" exitCode=0
Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.208143 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb"
Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.211100 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" event={"ID":"0c16a673-e56a-49ff-ac34-6910e02214a6","Type":"ContainerDied","Data":"38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350"}
Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.211134 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" event={"ID":"0c16a673-e56a-49ff-ac34-6910e02214a6","Type":"ContainerDied","Data":"7b9eaa6ff12a7628df3550e4b5486c4dd30838dd795331af359c3d19256bdd60"}
Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.211151 4902 scope.go:117] "RemoveContainer" containerID="38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350"
Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.226760 4902 scope.go:117] "RemoveContainer" containerID="38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350"
Jan 21 14:38:10 crc kubenswrapper[4902]: E0121 14:38:10.227201 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350\": container with ID starting with 38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350 not found: ID does not exist" containerID="38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350"
Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.227250 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350"} err="failed to get container status \"38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350\": rpc error: code = NotFound desc = could not find
container \"38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350\": container with ID starting with 38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350 not found: ID does not exist" Jan 21 14:38:12 crc kubenswrapper[4902]: I0121 14:38:12.330476 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:12 crc kubenswrapper[4902]: I0121 14:38:12.330891 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:12 crc kubenswrapper[4902]: I0121 14:38:12.330915 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:13 crc kubenswrapper[4902]: I0121 14:38:13.483216 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:38:13 crc kubenswrapper[4902]: I0121 14:38:13.487406 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:38:13 crc kubenswrapper[4902]: I0121 14:38:13.579737 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:38:15 crc kubenswrapper[4902]: I0121 14:38:15.211738 4902 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:15 crc kubenswrapper[4902]: I0121 14:38:15.214929 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad5166b668349af6575f85f6b6d5d5594bcc1811143eb1c97afcfd05ef5d83c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823188dc8f43c7423093c090374ff8be58e4e79e28b7a02a3a9d30349f9c9693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3482f1a841ee8066b99cd19499dcd169de10e84763f4402a7f79cf751954b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3eb8aa9e3640a1b39ba0a20e8ed265c0e4eb9a3df867da8f6365840f2fb53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff9c1202671af0b0a361747134a8092475701e7caa07ad1784ddcd6da6be2fe\\\",\\\"image\\\
":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:38:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": pods \"kube-apiserver-crc\" not found" Jan 21 14:38:15 crc kubenswrapper[4902]: I0121 14:38:15.233886 4902 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:15 crc kubenswrapper[4902]: I0121 14:38:15.233920 4902 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:15 crc kubenswrapper[4902]: I0121 14:38:15.237427 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:15 crc kubenswrapper[4902]: I0121 14:38:15.243065 4902 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1dd7ed5c-b84b-483e-a79c-dd31f29665ca" Jan 21 14:38:16 crc kubenswrapper[4902]: I0121 14:38:16.238474 4902 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:16 crc kubenswrapper[4902]: I0121 14:38:16.239642 4902 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:18 crc kubenswrapper[4902]: I0121 14:38:18.317146 4902 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1dd7ed5c-b84b-483e-a79c-dd31f29665ca" Jan 21 14:38:23 crc kubenswrapper[4902]: I0121 14:38:23.586306 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:38:24 crc kubenswrapper[4902]: I0121 14:38:24.549702 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 14:38:25 crc kubenswrapper[4902]: I0121 14:38:25.313423 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 14:38:25 crc kubenswrapper[4902]: I0121 14:38:25.463921 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 14:38:25 crc kubenswrapper[4902]: I0121 14:38:25.846723 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 14:38:26 crc kubenswrapper[4902]: I0121 14:38:26.782439 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 14:38:26 crc kubenswrapper[4902]: I0121 14:38:26.782790 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 14:38:26 crc 
kubenswrapper[4902]: I0121 14:38:26.783016 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 14:38:26 crc kubenswrapper[4902]: I0121 14:38:26.787313 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 14:38:26 crc kubenswrapper[4902]: I0121 14:38:26.842004 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 14:38:26 crc kubenswrapper[4902]: I0121 14:38:26.896344 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.095738 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.159865 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.169833 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.374956 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.401679 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.484222 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.543699 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.593940 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.829179 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.835487 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.849315 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.866520 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.943190 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.968317 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.984499 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 
14:38:28.044511 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.061286 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.067207 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.076108 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.093321 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.209572 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.487737 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.552878 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.772969 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.834085 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.841236 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.884765 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.923773 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.967757 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.997357 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.011549 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.031937 4902 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.140309 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.145896 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 
14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.195225 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.294096 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.314098 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.477079 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.488340 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.528597 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.567262 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.581960 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.588224 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.678491 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.705749 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.764880 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.776441 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.867674 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.898580 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.946844 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.969671 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.986483 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.038532 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 14:38:30 crc kubenswrapper[4902]: 
I0121 14:38:30.070018 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.091145 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.131895 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.309811 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.347867 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.362166 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.367099 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.504095 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.584729 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.743619 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.901065 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.948475 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.068973 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.071209 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.193572 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.225889 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.285242 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.506499 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.525772 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.622178 4902 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.710265 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.724572 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.778899 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.812632 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.825573 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.911871 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.990772 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.035413 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.061269 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.076176 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.155642 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.304454 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.529393 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.698101 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.832639 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.860900 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.914685 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.931008 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 
14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.971444 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.033608 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.199506 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.206771 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.270154 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.370461 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.388491 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.394180 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.619644 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.798329 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.845922 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.892037 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.923256 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.931743 4902 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.932944 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=40.932928018 podStartE2EDuration="40.932928018s" podCreationTimestamp="2026-01-21 14:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:38:15.09741177 +0000 UTC m=+257.174244799" watchObservedRunningTime="2026-01-21 14:38:33.932928018 +0000 UTC m=+276.009761047" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.934324 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-26g5j" podStartSLOduration=40.832240482 podStartE2EDuration="43.934316281s" podCreationTimestamp="2026-01-21 14:37:50 +0000 UTC" firstStartedPulling="2026-01-21 14:37:52.048226966 +0000 UTC m=+234.125059995" 
lastFinishedPulling="2026-01-21 14:37:55.150302765 +0000 UTC m=+237.227135794" observedRunningTime="2026-01-21 14:38:15.134447368 +0000 UTC m=+257.211280397" watchObservedRunningTime="2026-01-21 14:38:33.934316281 +0000 UTC m=+276.011149320" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.936831 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n2xzb","openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.936891 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-984c8fd85-7vnz7","openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 14:38:33 crc kubenswrapper[4902]: E0121 14:38:33.937117 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84af95e1-2275-49b2-987c-afa33fb32734" containerName="installer" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.937140 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="84af95e1-2275-49b2-987c-afa33fb32734" containerName="installer" Jan 21 14:38:33 crc kubenswrapper[4902]: E0121 14:38:33.937164 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c16a673-e56a-49ff-ac34-6910e02214a6" containerName="oauth-openshift" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.937174 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c16a673-e56a-49ff-ac34-6910e02214a6" containerName="oauth-openshift" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.937436 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c16a673-e56a-49ff-ac34-6910e02214a6" containerName="oauth-openshift" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.937461 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="84af95e1-2275-49b2-987c-afa33fb32734" containerName="installer" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.937464 4902 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.937496 4902 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.937913 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.940609 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.941098 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.941348 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.941682 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.941725 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.941817 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.941829 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.941867 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.942058 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.942095 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.942325 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.942449 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.947221 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.948359 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.961030 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.965003 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.966990 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973163 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973214 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-session\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973245 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59eebcd0-5352-4547-b84b-8de6538c7a03-audit-dir\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973419 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973479 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-audit-policies\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973562 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-service-ca\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973629 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-login\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973740 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-router-certs\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973811 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.974325 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nqs8\" (UniqueName: \"kubernetes.io/projected/59eebcd0-5352-4547-b84b-8de6538c7a03-kube-api-access-8nqs8\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.974409 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.974469 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-cliconfig\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.974499 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-error\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.974555 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-serving-cert\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.986699 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.98667697 podStartE2EDuration="18.98667697s" podCreationTimestamp="2026-01-21 14:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:38:33.98344012 +0000 UTC m=+276.060273149" watchObservedRunningTime="2026-01-21 14:38:33.98667697 +0000 UTC m=+276.063510019" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.010865 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.022539 4902 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075238 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nqs8\" (UniqueName: \"kubernetes.io/projected/59eebcd0-5352-4547-b84b-8de6538c7a03-kube-api-access-8nqs8\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075291 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075336 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-cliconfig\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075363 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-error\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075383 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-serving-cert\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075417 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-session\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075439 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075462 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59eebcd0-5352-4547-b84b-8de6538c7a03-audit-dir\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 
14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075489 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075509 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-audit-policies\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075546 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-service-ca\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075567 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-login\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075597 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-router-certs\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075630 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.076748 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59eebcd0-5352-4547-b84b-8de6538c7a03-audit-dir\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.077018 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-cliconfig\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.077219 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-audit-policies\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.077443 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.077873 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-service-ca\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.081785 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-router-certs\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.081960 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.082338 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-session\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.082392 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-serving-cert\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.082528 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.082780 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-error\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.083534 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-login\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.086723 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.092093 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.093587 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nqs8\" (UniqueName: \"kubernetes.io/projected/59eebcd0-5352-4547-b84b-8de6538c7a03-kube-api-access-8nqs8\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.131519 4902 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.149544 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.166871 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.237648 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.259239 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.295944 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.301127 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c16a673-e56a-49ff-ac34-6910e02214a6" path="/var/lib/kubelet/pods/0c16a673-e56a-49ff-ac34-6910e02214a6/volumes" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.311469 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.354218 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.372289 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.380214 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.413950 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.421361 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.438936 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.452459 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.511499 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.530089 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.539263 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.542064 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.568059 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.707243 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.815228 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.916530 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 
14:38:34.916628 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.068282 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.095009 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.108982 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.158616 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.252156 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.342914 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.445881 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.481058 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.552799 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.565967 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.570813 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.580232 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.597311 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.622438 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.657020 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.699307 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.798766 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.973476 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-984c8fd85-7vnz7"] Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.104004 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.131813 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.216437 4902 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.285421 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.619804 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.663843 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.665884 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.852138 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" event={"ID":"59eebcd0-5352-4547-b84b-8de6538c7a03","Type":"ContainerStarted","Data":"869c1f5dc9f4ec2f0178b825ba09a2f9f2fe20a5dc314ece6abcacfb7fd245c9"} Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.852205 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" event={"ID":"59eebcd0-5352-4547-b84b-8de6538c7a03","Type":"ContainerStarted","Data":"f20b329b2afcc59d2ad8385b89bb7a8933a00cb1bdad6a8d834a7cc51454aca7"} Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.853390 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.858392 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.874248 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" podStartSLOduration=52.874228308 podStartE2EDuration="52.874228308s" podCreationTimestamp="2026-01-21 14:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:38:36.871925756 +0000 UTC m=+278.948758825" watchObservedRunningTime="2026-01-21 14:38:36.874228308 +0000 UTC m=+278.951061337" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.887535 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.888969 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.967237 4902 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.005671 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.061073 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.132014 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.181371 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.275069 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.275103 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.275767 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.305793 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.331989 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.346915 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.480630 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.628510 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.656971 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.663358 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.702302 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.716608 4902 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.716877 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ba32cd74229e0518ced5be5a05249085a9351531d73703f33e7dd49b6eafbb78" gracePeriod=5 Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.744694 4902 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.782003 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.808499 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.817177 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.827936 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.926202 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.949384 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.960204 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.998730 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.022823 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.095347 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.118801 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.397663 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.404788 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.442436 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.516009 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.650477 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.749486 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.770494 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 
14:38:38.777659 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.796938 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.843012 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.004790 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.016237 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.038535 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.094719 4902 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.314288 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.351549 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.429134 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.429515 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.505493 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.545125 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.545942 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.630659 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.717462 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.841466 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.934986 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.935220 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 
14:38:39.947728 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.038521 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.050734 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.320873 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.360304 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.528783 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.597684 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.613036 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.847513 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nccj"] Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.912086 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 14:38:41 crc kubenswrapper[4902]: I0121 14:38:41.065315 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 14:38:41 crc kubenswrapper[4902]: I0121 14:38:41.425884 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 14:38:41 crc kubenswrapper[4902]: I0121 14:38:41.455168 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 14:38:41 crc kubenswrapper[4902]: I0121 14:38:41.536269 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 14:38:41 crc kubenswrapper[4902]: I0121 14:38:41.782887 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 14:38:42 crc kubenswrapper[4902]: I0121 14:38:42.158061 4902 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 14:38:42 crc kubenswrapper[4902]: I0121 14:38:42.892606 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 14:38:42 crc kubenswrapper[4902]: I0121 14:38:42.892912 4902 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ba32cd74229e0518ced5be5a05249085a9351531d73703f33e7dd49b6eafbb78" exitCode=137 Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.286682 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.286753 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403172 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403224 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403303 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403308 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403362 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403394 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403441 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403450 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403499 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403669 4902 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403690 4902 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403700 4902 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403710 4902 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.412658 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.474771 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.504531 4902 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.899609 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.899691 4902 scope.go:117] "RemoveContainer" containerID="ba32cd74229e0518ced5be5a05249085a9351531d73703f33e7dd49b6eafbb78" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.899781 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:38:44 crc kubenswrapper[4902]: I0121 14:38:44.301649 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 21 14:38:44 crc kubenswrapper[4902]: I0121 14:38:44.302345 4902 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 21 14:38:44 crc kubenswrapper[4902]: I0121 14:38:44.312671 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 14:38:44 crc kubenswrapper[4902]: I0121 14:38:44.312720 4902 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6611278e-a0c8-4ecd-a30f-18efb57ba215" Jan 21 14:38:44 crc kubenswrapper[4902]: I0121 14:38:44.316070 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 14:38:44 crc kubenswrapper[4902]: I0121 14:38:44.316157 4902 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6611278e-a0c8-4ecd-a30f-18efb57ba215" Jan 21 14:38:46 crc kubenswrapper[4902]: I0121 14:38:46.714868 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6466b9bcb-hkw2b"] Jan 21 14:38:46 crc kubenswrapper[4902]: I0121 14:38:46.715500 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" podUID="a79a8460-e7c3-4c10-b5b9-6626715eb24a" containerName="controller-manager" containerID="cri-o://d7270dbcc770b97b85d9ccbb0214929ed8fa65fd7e0aee8a26b7893223ddebfa" gracePeriod=30 Jan 21 14:38:46 crc kubenswrapper[4902]: I0121 14:38:46.816693 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh"] Jan 21 14:38:46 crc kubenswrapper[4902]: I0121 14:38:46.817004 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" podUID="cb38b0db-02f2-4797-831b-baadb29db220" containerName="route-controller-manager" containerID="cri-o://7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd" gracePeriod=30 Jan 21 14:38:46 crc kubenswrapper[4902]: I0121 14:38:46.920630 4902 generic.go:334] "Generic (PLEG): container finished" podID="a79a8460-e7c3-4c10-b5b9-6626715eb24a" containerID="d7270dbcc770b97b85d9ccbb0214929ed8fa65fd7e0aee8a26b7893223ddebfa" exitCode=0 Jan 21 14:38:46 crc kubenswrapper[4902]: I0121 14:38:46.920672 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" event={"ID":"a79a8460-e7c3-4c10-b5b9-6626715eb24a","Type":"ContainerDied","Data":"d7270dbcc770b97b85d9ccbb0214929ed8fa65fd7e0aee8a26b7893223ddebfa"} Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.190879 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.196981 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361351 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a79a8460-e7c3-4c10-b5b9-6626715eb24a-serving-cert\") pod \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361459 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qjlj\" (UniqueName: \"kubernetes.io/projected/cb38b0db-02f2-4797-831b-baadb29db220-kube-api-access-4qjlj\") pod \"cb38b0db-02f2-4797-831b-baadb29db220\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361498 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-proxy-ca-bundles\") pod \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361521 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hwbb\" (UniqueName: \"kubernetes.io/projected/a79a8460-e7c3-4c10-b5b9-6626715eb24a-kube-api-access-7hwbb\") pod \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361558 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-client-ca\") pod \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361614 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-config\") pod \"cb38b0db-02f2-4797-831b-baadb29db220\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361631 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-config\") pod \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361685 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-client-ca\") pod \"cb38b0db-02f2-4797-831b-baadb29db220\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361726 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb38b0db-02f2-4797-831b-baadb29db220-serving-cert\") pod \"cb38b0db-02f2-4797-831b-baadb29db220\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.362764 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-client-ca" (OuterVolumeSpecName: "client-ca") pod "a79a8460-e7c3-4c10-b5b9-6626715eb24a" 
(UID: "a79a8460-e7c3-4c10-b5b9-6626715eb24a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.362817 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a79a8460-e7c3-4c10-b5b9-6626715eb24a" (UID: "a79a8460-e7c3-4c10-b5b9-6626715eb24a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.363152 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-config" (OuterVolumeSpecName: "config") pod "a79a8460-e7c3-4c10-b5b9-6626715eb24a" (UID: "a79a8460-e7c3-4c10-b5b9-6626715eb24a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.363292 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-client-ca" (OuterVolumeSpecName: "client-ca") pod "cb38b0db-02f2-4797-831b-baadb29db220" (UID: "cb38b0db-02f2-4797-831b-baadb29db220"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.363310 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-config" (OuterVolumeSpecName: "config") pod "cb38b0db-02f2-4797-831b-baadb29db220" (UID: "cb38b0db-02f2-4797-831b-baadb29db220"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.367881 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb38b0db-02f2-4797-831b-baadb29db220-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cb38b0db-02f2-4797-831b-baadb29db220" (UID: "cb38b0db-02f2-4797-831b-baadb29db220"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.367963 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a79a8460-e7c3-4c10-b5b9-6626715eb24a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a79a8460-e7c3-4c10-b5b9-6626715eb24a" (UID: "a79a8460-e7c3-4c10-b5b9-6626715eb24a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.368106 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb38b0db-02f2-4797-831b-baadb29db220-kube-api-access-4qjlj" (OuterVolumeSpecName: "kube-api-access-4qjlj") pod "cb38b0db-02f2-4797-831b-baadb29db220" (UID: "cb38b0db-02f2-4797-831b-baadb29db220"). InnerVolumeSpecName "kube-api-access-4qjlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.368913 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a79a8460-e7c3-4c10-b5b9-6626715eb24a-kube-api-access-7hwbb" (OuterVolumeSpecName: "kube-api-access-7hwbb") pod "a79a8460-e7c3-4c10-b5b9-6626715eb24a" (UID: "a79a8460-e7c3-4c10-b5b9-6626715eb24a"). 
InnerVolumeSpecName "kube-api-access-7hwbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.463748 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.463800 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.463808 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.463881 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.464453 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb38b0db-02f2-4797-831b-baadb29db220-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.464484 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a79a8460-e7c3-4c10-b5b9-6626715eb24a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.464496 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qjlj\" (UniqueName: \"kubernetes.io/projected/cb38b0db-02f2-4797-831b-baadb29db220-kube-api-access-4qjlj\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.464508 4902 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.464517 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hwbb\" (UniqueName: \"kubernetes.io/projected/a79a8460-e7c3-4c10-b5b9-6626715eb24a-kube-api-access-7hwbb\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.819684 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bd978d84c-b5724"] Jan 21 14:38:47 crc kubenswrapper[4902]: E0121 14:38:47.819970 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.819988 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 14:38:47 crc kubenswrapper[4902]: E0121 14:38:47.820011 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb38b0db-02f2-4797-831b-baadb29db220" containerName="route-controller-manager" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.820019 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb38b0db-02f2-4797-831b-baadb29db220" containerName="route-controller-manager" Jan 21 14:38:47 crc kubenswrapper[4902]: E0121 14:38:47.820036 4902 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a79a8460-e7c3-4c10-b5b9-6626715eb24a" containerName="controller-manager" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.820049 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79a8460-e7c3-4c10-b5b9-6626715eb24a" containerName="controller-manager" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.821516 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.821543 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb38b0db-02f2-4797-831b-baadb29db220" containerName="route-controller-manager" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.821563 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="a79a8460-e7c3-4c10-b5b9-6626715eb24a" containerName="controller-manager" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.822274 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.836466 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bd978d84c-b5724"] Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.928658 4902 generic.go:334] "Generic (PLEG): container finished" podID="cb38b0db-02f2-4797-831b-baadb29db220" containerID="7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd" exitCode=0 Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.928771 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" event={"ID":"cb38b0db-02f2-4797-831b-baadb29db220","Type":"ContainerDied","Data":"7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd"} Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.928825 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" event={"ID":"cb38b0db-02f2-4797-831b-baadb29db220","Type":"ContainerDied","Data":"a9e0a2e8240b1e4870419d402cb4655289e3ed04ceb3d2a54121480c8cb83557"} Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.928843 4902 scope.go:117] "RemoveContainer" containerID="7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.929011 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.930453 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" event={"ID":"a79a8460-e7c3-4c10-b5b9-6626715eb24a","Type":"ContainerDied","Data":"35dad082b0eb1ecb464f03e9456b66ffeaa63c62d77bb1e4f7192a72abdf2c4d"} Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.930535 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.945556 4902 scope.go:117] "RemoveContainer" containerID="7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd" Jan 21 14:38:47 crc kubenswrapper[4902]: E0121 14:38:47.946264 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd\": container with ID starting with 7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd not found: ID does not exist" containerID="7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.946307 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd"} err="failed to get container status \"7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd\": rpc error: code = NotFound desc = could not find container \"7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd\": container with ID starting with 7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd not found: ID does not exist" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.946336 4902 scope.go:117] "RemoveContainer" containerID="d7270dbcc770b97b85d9ccbb0214929ed8fa65fd7e0aee8a26b7893223ddebfa" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.965335 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6466b9bcb-hkw2b"] Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.970920 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5srg5\" (UniqueName: \"kubernetes.io/projected/125eaa50-dfc7-4d81-8e49-28c62e080939-kube-api-access-5srg5\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.970975 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-config\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.971000 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125eaa50-dfc7-4d81-8e49-28c62e080939-serving-cert\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.971082 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-client-ca\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.971107 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-proxy-ca-bundles\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.973720 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6466b9bcb-hkw2b"] Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.977053 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh"] Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.980433 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh"] Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.071997 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-client-ca\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.072119 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-proxy-ca-bundles\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.072176 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5srg5\" (UniqueName: \"kubernetes.io/projected/125eaa50-dfc7-4d81-8e49-28c62e080939-kube-api-access-5srg5\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.072229 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-config\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.072264 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125eaa50-dfc7-4d81-8e49-28c62e080939-serving-cert\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.073107 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-client-ca\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.073461 4902 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-proxy-ca-bundles\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.073717 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-config\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.086895 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125eaa50-dfc7-4d81-8e49-28c62e080939-serving-cert\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.094275 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5srg5\" (UniqueName: \"kubernetes.io/projected/125eaa50-dfc7-4d81-8e49-28c62e080939-kube-api-access-5srg5\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.175639 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.301621 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a79a8460-e7c3-4c10-b5b9-6626715eb24a" path="/var/lib/kubelet/pods/a79a8460-e7c3-4c10-b5b9-6626715eb24a/volumes" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.302978 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb38b0db-02f2-4797-831b-baadb29db220" path="/var/lib/kubelet/pods/cb38b0db-02f2-4797-831b-baadb29db220/volumes" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.619197 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bd978d84c-b5724"] Jan 21 14:38:48 crc kubenswrapper[4902]: W0121 14:38:48.624269 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod125eaa50_dfc7_4d81_8e49_28c62e080939.slice/crio-7e5d892850f253ed31b54fcf96bbe2dd5879eb1ddef0410a13f4406da7111df4 WatchSource:0}: Error finding container 7e5d892850f253ed31b54fcf96bbe2dd5879eb1ddef0410a13f4406da7111df4: Status 404 returned error can't find the container with id 7e5d892850f253ed31b54fcf96bbe2dd5879eb1ddef0410a13f4406da7111df4 Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.817080 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh"] Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.817949 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.820318 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.820339 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.820419 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.820487 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.820325 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.821301 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.832451 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh"] Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.887253 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19c29a84-66f0-4537-b266-ec000b3bd70e-serving-cert\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.887313 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx55h\" (UniqueName: \"kubernetes.io/projected/19c29a84-66f0-4537-b266-ec000b3bd70e-kube-api-access-lx55h\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.887514 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-client-ca\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.887693 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-config\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.939783 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" 
event={"ID":"125eaa50-dfc7-4d81-8e49-28c62e080939","Type":"ContainerStarted","Data":"70a7c205fbbabf5baec83c64bd2a844890660d39714ab1e0611b23c452ed7f43"} Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.939834 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" event={"ID":"125eaa50-dfc7-4d81-8e49-28c62e080939","Type":"ContainerStarted","Data":"7e5d892850f253ed31b54fcf96bbe2dd5879eb1ddef0410a13f4406da7111df4"} Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.940030 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.970133 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" podStartSLOduration=2.970115194 podStartE2EDuration="2.970115194s" podCreationTimestamp="2026-01-21 14:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:38:48.965963675 +0000 UTC m=+291.042796704" watchObservedRunningTime="2026-01-21 14:38:48.970115194 +0000 UTC m=+291.046948223" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.989462 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx55h\" (UniqueName: \"kubernetes.io/projected/19c29a84-66f0-4537-b266-ec000b3bd70e-kube-api-access-lx55h\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.989586 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-client-ca\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.989673 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-config\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.989697 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19c29a84-66f0-4537-b266-ec000b3bd70e-serving-cert\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.990769 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-client-ca\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.991141 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-config\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.996784 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:49 crc kubenswrapper[4902]: I0121 14:38:49.009473 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19c29a84-66f0-4537-b266-ec000b3bd70e-serving-cert\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:49 crc kubenswrapper[4902]: I0121 14:38:49.016671 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx55h\" (UniqueName: \"kubernetes.io/projected/19c29a84-66f0-4537-b266-ec000b3bd70e-kube-api-access-lx55h\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:49 crc kubenswrapper[4902]: I0121 14:38:49.153897 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:49 crc kubenswrapper[4902]: I0121 14:38:49.332667 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh"] Jan 21 14:38:49 crc kubenswrapper[4902]: W0121 14:38:49.337741 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19c29a84_66f0_4537_b266_ec000b3bd70e.slice/crio-26466d6a0f3dd74552659838d2ebe09425c9b03a70d960e33a6db94de70ceba7 WatchSource:0}: Error finding container 26466d6a0f3dd74552659838d2ebe09425c9b03a70d960e33a6db94de70ceba7: Status 404 returned error can't find the container with id 26466d6a0f3dd74552659838d2ebe09425c9b03a70d960e33a6db94de70ceba7 Jan 21 14:38:49 crc kubenswrapper[4902]: I0121 14:38:49.945535 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" event={"ID":"19c29a84-66f0-4537-b266-ec000b3bd70e","Type":"ContainerStarted","Data":"26466d6a0f3dd74552659838d2ebe09425c9b03a70d960e33a6db94de70ceba7"} Jan 21 14:38:50 crc kubenswrapper[4902]: I0121 14:38:50.955928 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" event={"ID":"19c29a84-66f0-4537-b266-ec000b3bd70e","Type":"ContainerStarted","Data":"45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45"} Jan 21 14:38:50 crc kubenswrapper[4902]: I0121 14:38:50.956899 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:50 crc kubenswrapper[4902]: I0121 14:38:50.965626 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:50 crc kubenswrapper[4902]: I0121 
14:38:50.983866 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" podStartSLOduration=4.983832945 podStartE2EDuration="4.983832945s" podCreationTimestamp="2026-01-21 14:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:38:50.978721296 +0000 UTC m=+293.055554325" watchObservedRunningTime="2026-01-21 14:38:50.983832945 +0000 UTC m=+293.060665974" Jan 21 14:38:58 crc kubenswrapper[4902]: I0121 14:38:58.147638 4902 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 21 14:39:05 crc kubenswrapper[4902]: I0121 14:39:05.912138 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" podUID="2e95c252-bd71-44fe-a8f1-d9a346d8a882" containerName="registry" containerID="cri-o://7547a62e909793d452303b8e38ed4e3709638a07c8cd2df82117a97266265a83" gracePeriod=30 Jan 21 14:39:06 crc kubenswrapper[4902]: I0121 14:39:06.715521 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh"] Jan 21 14:39:06 crc kubenswrapper[4902]: I0121 14:39:06.715767 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" podUID="19c29a84-66f0-4537-b266-ec000b3bd70e" containerName="route-controller-manager" containerID="cri-o://45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45" gracePeriod=30 Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.766175 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.811984 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf"] Jan 21 14:39:07 crc kubenswrapper[4902]: E0121 14:39:07.812224 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c29a84-66f0-4537-b266-ec000b3bd70e" containerName="route-controller-manager" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.812235 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c29a84-66f0-4537-b266-ec000b3bd70e" containerName="route-controller-manager" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.812334 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="19c29a84-66f0-4537-b266-ec000b3bd70e" containerName="route-controller-manager" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.812731 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.813283 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf"] Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.824950 4902 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-9nccj container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.26:5000/healthz\": dial tcp 10.217.0.26:5000: connect: connection refused" start-of-body= Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.827177 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" podUID="2e95c252-bd71-44fe-a8f1-d9a346d8a882" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.26:5000/healthz\": dial tcp 10.217.0.26:5000: connect: connection refused" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.861093 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-client-ca\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.861182 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hqf9\" (UniqueName: \"kubernetes.io/projected/8c457167-3537-454e-9813-8d4368a5c81a-kube-api-access-9hqf9\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.861273 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c457167-3537-454e-9813-8d4368a5c81a-serving-cert\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.861369 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-config\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.963176 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19c29a84-66f0-4537-b266-ec000b3bd70e-serving-cert\") pod \"19c29a84-66f0-4537-b266-ec000b3bd70e\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.963235 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx55h\" (UniqueName: \"kubernetes.io/projected/19c29a84-66f0-4537-b266-ec000b3bd70e-kube-api-access-lx55h\") pod \"19c29a84-66f0-4537-b266-ec000b3bd70e\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") 
" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.963266 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-config\") pod \"19c29a84-66f0-4537-b266-ec000b3bd70e\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.963295 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-client-ca\") pod \"19c29a84-66f0-4537-b266-ec000b3bd70e\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.963420 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c457167-3537-454e-9813-8d4368a5c81a-serving-cert\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.963473 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-config\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.963528 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-client-ca\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.963557 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hqf9\" (UniqueName: \"kubernetes.io/projected/8c457167-3537-454e-9813-8d4368a5c81a-kube-api-access-9hqf9\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.964067 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-config" (OuterVolumeSpecName: "config") pod "19c29a84-66f0-4537-b266-ec000b3bd70e" (UID: "19c29a84-66f0-4537-b266-ec000b3bd70e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.964516 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-client-ca" (OuterVolumeSpecName: "client-ca") pod "19c29a84-66f0-4537-b266-ec000b3bd70e" (UID: "19c29a84-66f0-4537-b266-ec000b3bd70e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.964882 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-client-ca\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.965170 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-config\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.969285 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19c29a84-66f0-4537-b266-ec000b3bd70e-kube-api-access-lx55h" (OuterVolumeSpecName: "kube-api-access-lx55h") pod "19c29a84-66f0-4537-b266-ec000b3bd70e" (UID: "19c29a84-66f0-4537-b266-ec000b3bd70e"). InnerVolumeSpecName "kube-api-access-lx55h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.969813 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c457167-3537-454e-9813-8d4368a5c81a-serving-cert\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.975201 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c29a84-66f0-4537-b266-ec000b3bd70e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "19c29a84-66f0-4537-b266-ec000b3bd70e" (UID: "19c29a84-66f0-4537-b266-ec000b3bd70e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.991881 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hqf9\" (UniqueName: \"kubernetes.io/projected/8c457167-3537-454e-9813-8d4368a5c81a-kube-api-access-9hqf9\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.043533 4902 generic.go:334] "Generic (PLEG): container finished" podID="2e95c252-bd71-44fe-a8f1-d9a346d8a882" containerID="7547a62e909793d452303b8e38ed4e3709638a07c8cd2df82117a97266265a83" exitCode=0 Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.043600 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" event={"ID":"2e95c252-bd71-44fe-a8f1-d9a346d8a882","Type":"ContainerDied","Data":"7547a62e909793d452303b8e38ed4e3709638a07c8cd2df82117a97266265a83"} Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.044840 4902 generic.go:334] "Generic (PLEG): container finished" podID="19c29a84-66f0-4537-b266-ec000b3bd70e" containerID="45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45" exitCode=0 Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.044864 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" event={"ID":"19c29a84-66f0-4537-b266-ec000b3bd70e","Type":"ContainerDied","Data":"45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45"} Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.044885 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" event={"ID":"19c29a84-66f0-4537-b266-ec000b3bd70e","Type":"ContainerDied","Data":"26466d6a0f3dd74552659838d2ebe09425c9b03a70d960e33a6db94de70ceba7"} Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.044913 4902 scope.go:117] "RemoveContainer" containerID="45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.044918 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.060297 4902 scope.go:117] "RemoveContainer" containerID="45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45" Jan 21 14:39:08 crc kubenswrapper[4902]: E0121 14:39:08.061480 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45\": container with ID starting with 45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45 not found: ID does not exist" containerID="45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.061531 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45"} err="failed to get container status \"45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45\": rpc error: code = NotFound desc = could not find container \"45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45\": container with ID starting with 45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45 not found: ID does not exist" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.065268 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19c29a84-66f0-4537-b266-ec000b3bd70e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.065299 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx55h\" (UniqueName: \"kubernetes.io/projected/19c29a84-66f0-4537-b266-ec000b3bd70e-kube-api-access-lx55h\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.065313 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.065325 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.077603 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh"] Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.082405 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh"] Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.132291 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.305391 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19c29a84-66f0-4537-b266-ec000b3bd70e" path="/var/lib/kubelet/pods/19c29a84-66f0-4537-b266-ec000b3bd70e/volumes" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.372018 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.554832 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf"] Jan 21 14:39:08 crc kubenswrapper[4902]: W0121 14:39:08.563425 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c457167_3537_454e_9813_8d4368a5c81a.slice/crio-ae70df3ba9716f535c67df452b7e64d459d015c7500af14b1cae99a0a0635724 WatchSource:0}: Error finding container ae70df3ba9716f535c67df452b7e64d459d015c7500af14b1cae99a0a0635724: Status 404 returned error can't find the container with id ae70df3ba9716f535c67df452b7e64d459d015c7500af14b1cae99a0a0635724 Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.569659 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e95c252-bd71-44fe-a8f1-d9a346d8a882-installation-pull-secrets\") pod \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.569708 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-certificates\") pod \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.569745 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-tls\") pod \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.569767 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9l2m\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-kube-api-access-r9l2m\") pod \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.569795 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e95c252-bd71-44fe-a8f1-d9a346d8a882-ca-trust-extracted\") pod \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.569850 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-bound-sa-token\") pod \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.569877 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-trusted-ca\") pod \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.570036 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.570821 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "2e95c252-bd71-44fe-a8f1-d9a346d8a882" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.571872 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2e95c252-bd71-44fe-a8f1-d9a346d8a882" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.575116 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-kube-api-access-r9l2m" (OuterVolumeSpecName: "kube-api-access-r9l2m") pod "2e95c252-bd71-44fe-a8f1-d9a346d8a882" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882"). InnerVolumeSpecName "kube-api-access-r9l2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.575717 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e95c252-bd71-44fe-a8f1-d9a346d8a882-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "2e95c252-bd71-44fe-a8f1-d9a346d8a882" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.575820 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "2e95c252-bd71-44fe-a8f1-d9a346d8a882" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.576152 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "2e95c252-bd71-44fe-a8f1-d9a346d8a882" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.589949 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "2e95c252-bd71-44fe-a8f1-d9a346d8a882" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.597913 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e95c252-bd71-44fe-a8f1-d9a346d8a882-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "2e95c252-bd71-44fe-a8f1-d9a346d8a882" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.672002 4902 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.672030 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.672056 4902 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e95c252-bd71-44fe-a8f1-d9a346d8a882-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.672067 4902 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.672077 4902 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.672085 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9l2m\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-kube-api-access-r9l2m\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.672094 4902 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e95c252-bd71-44fe-a8f1-d9a346d8a882-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.050852 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" event={"ID":"8c457167-3537-454e-9813-8d4368a5c81a","Type":"ContainerStarted","Data":"6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a"} Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.050898 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" event={"ID":"8c457167-3537-454e-9813-8d4368a5c81a","Type":"ContainerStarted","Data":"ae70df3ba9716f535c67df452b7e64d459d015c7500af14b1cae99a0a0635724"} Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.051084 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.052344 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" 
event={"ID":"2e95c252-bd71-44fe-a8f1-d9a346d8a882","Type":"ContainerDied","Data":"72fa44f70a1a8a5c4b377700f7f908db843af15c5da8c33d09c4e26da32bbe19"} Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.052391 4902 scope.go:117] "RemoveContainer" containerID="7547a62e909793d452303b8e38ed4e3709638a07c8cd2df82117a97266265a83" Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.052500 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.059115 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.081124 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" podStartSLOduration=3.081103946 podStartE2EDuration="3.081103946s" podCreationTimestamp="2026-01-21 14:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:39:09.078407582 +0000 UTC m=+311.155240611" watchObservedRunningTime="2026-01-21 14:39:09.081103946 +0000 UTC m=+311.157936975" Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.092654 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nccj"] Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.101731 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nccj"] Jan 21 14:39:10 crc kubenswrapper[4902]: I0121 14:39:10.300867 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e95c252-bd71-44fe-a8f1-d9a346d8a882" path="/var/lib/kubelet/pods/2e95c252-bd71-44fe-a8f1-d9a346d8a882/volumes" Jan 21 14:40:06 crc kubenswrapper[4902]: I0121 14:40:06.758315 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf"] Jan 21 14:40:06 crc kubenswrapper[4902]: I0121 14:40:06.759307 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" podUID="8c457167-3537-454e-9813-8d4368a5c81a" containerName="route-controller-manager" containerID="cri-o://6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a" gracePeriod=30 Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.247590 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.368058 4902 generic.go:334] "Generic (PLEG): container finished" podID="8c457167-3537-454e-9813-8d4368a5c81a" containerID="6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a" exitCode=0 Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.368114 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" event={"ID":"8c457167-3537-454e-9813-8d4368a5c81a","Type":"ContainerDied","Data":"6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a"} Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.368145 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" event={"ID":"8c457167-3537-454e-9813-8d4368a5c81a","Type":"ContainerDied","Data":"ae70df3ba9716f535c67df452b7e64d459d015c7500af14b1cae99a0a0635724"} Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.368143 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.368164 4902 scope.go:117] "RemoveContainer" containerID="6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.387445 4902 scope.go:117] "RemoveContainer" containerID="6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a" Jan 21 14:40:07 crc kubenswrapper[4902]: E0121 14:40:07.388732 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a\": container with ID starting with 6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a not found: ID does not exist" containerID="6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.388801 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a"} err="failed to get container status \"6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a\": rpc error: code = NotFound desc = could not find container \"6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a\": container with ID starting with 6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a not found: ID does not exist" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.410733 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c457167-3537-454e-9813-8d4368a5c81a-serving-cert\") pod \"8c457167-3537-454e-9813-8d4368a5c81a\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.410861 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-client-ca\") pod \"8c457167-3537-454e-9813-8d4368a5c81a\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.410902 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-9hqf9\" (UniqueName: \"kubernetes.io/projected/8c457167-3537-454e-9813-8d4368a5c81a-kube-api-access-9hqf9\") pod \"8c457167-3537-454e-9813-8d4368a5c81a\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.410956 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-config\") pod \"8c457167-3537-454e-9813-8d4368a5c81a\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.411899 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-config" (OuterVolumeSpecName: "config") pod "8c457167-3537-454e-9813-8d4368a5c81a" (UID: "8c457167-3537-454e-9813-8d4368a5c81a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.412302 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-client-ca" (OuterVolumeSpecName: "client-ca") pod "8c457167-3537-454e-9813-8d4368a5c81a" (UID: "8c457167-3537-454e-9813-8d4368a5c81a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.422341 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c457167-3537-454e-9813-8d4368a5c81a-kube-api-access-9hqf9" (OuterVolumeSpecName: "kube-api-access-9hqf9") pod "8c457167-3537-454e-9813-8d4368a5c81a" (UID: "8c457167-3537-454e-9813-8d4368a5c81a"). InnerVolumeSpecName "kube-api-access-9hqf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.422544 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c457167-3537-454e-9813-8d4368a5c81a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8c457167-3537-454e-9813-8d4368a5c81a" (UID: "8c457167-3537-454e-9813-8d4368a5c81a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.513111 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c457167-3537-454e-9813-8d4368a5c81a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.513157 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.513170 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hqf9\" (UniqueName: \"kubernetes.io/projected/8c457167-3537-454e-9813-8d4368a5c81a-kube-api-access-9hqf9\") on node \"crc\" DevicePath \"\"" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.513183 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.694223 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf"] Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.699566 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf"] Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.874164 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt"] Jan 21 14:40:07 crc kubenswrapper[4902]: E0121 14:40:07.874407 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e95c252-bd71-44fe-a8f1-d9a346d8a882" containerName="registry" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.874425 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e95c252-bd71-44fe-a8f1-d9a346d8a882" containerName="registry" Jan 21 14:40:07 crc kubenswrapper[4902]: E0121 14:40:07.874471 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c457167-3537-454e-9813-8d4368a5c81a" containerName="route-controller-manager" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.874480 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c457167-3537-454e-9813-8d4368a5c81a" containerName="route-controller-manager" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.874620 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e95c252-bd71-44fe-a8f1-d9a346d8a882" containerName="registry" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.874640 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c457167-3537-454e-9813-8d4368a5c81a" containerName="route-controller-manager" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.875060 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.876723 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.882098 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.882312 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.882327 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.882837 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.882915 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.892170 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt"] Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.018626 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73d979ed-d4af-414c-911c-d6246f682f19-client-ca\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.018704 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6x9k\" (UniqueName: \"kubernetes.io/projected/73d979ed-d4af-414c-911c-d6246f682f19-kube-api-access-n6x9k\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.018751 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d979ed-d4af-414c-911c-d6246f682f19-config\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.018779 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73d979ed-d4af-414c-911c-d6246f682f19-serving-cert\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.119654 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d979ed-d4af-414c-911c-d6246f682f19-config\") pod 
\"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.119715 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73d979ed-d4af-414c-911c-d6246f682f19-serving-cert\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.119777 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73d979ed-d4af-414c-911c-d6246f682f19-client-ca\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.119810 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6x9k\" (UniqueName: \"kubernetes.io/projected/73d979ed-d4af-414c-911c-d6246f682f19-kube-api-access-n6x9k\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.121025 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d979ed-d4af-414c-911c-d6246f682f19-config\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.121923 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73d979ed-d4af-414c-911c-d6246f682f19-client-ca\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.129909 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73d979ed-d4af-414c-911c-d6246f682f19-serving-cert\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.150395 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6x9k\" (UniqueName: \"kubernetes.io/projected/73d979ed-d4af-414c-911c-d6246f682f19-kube-api-access-n6x9k\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.188740 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.305909 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c457167-3537-454e-9813-8d4368a5c81a" path="/var/lib/kubelet/pods/8c457167-3537-454e-9813-8d4368a5c81a/volumes" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.641195 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt"] Jan 21 14:40:09 crc kubenswrapper[4902]: I0121 14:40:09.386416 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" event={"ID":"73d979ed-d4af-414c-911c-d6246f682f19","Type":"ContainerStarted","Data":"8b59272b0e8ef0c26929cebefe1f549c93e075b724ad829848efd3dbe8b75b87"} Jan 21 14:40:09 crc kubenswrapper[4902]: I0121 14:40:09.389400 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" event={"ID":"73d979ed-d4af-414c-911c-d6246f682f19","Type":"ContainerStarted","Data":"2f8af44cbe969b1d62717c4fd7f965210b4ef59ce492b827dd2dcb705d902642"} Jan 21 14:40:09 crc kubenswrapper[4902]: I0121 14:40:09.389531 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:09 crc kubenswrapper[4902]: I0121 14:40:09.444988 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:09 crc kubenswrapper[4902]: I0121 14:40:09.466334 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" podStartSLOduration=3.466314067 podStartE2EDuration="3.466314067s" podCreationTimestamp="2026-01-21 14:40:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:40:09.409480252 +0000 UTC m=+371.486313301" watchObservedRunningTime="2026-01-21 14:40:09.466314067 +0000 UTC m=+371.543147096" Jan 21 14:40:17 crc kubenswrapper[4902]: I0121 14:40:17.769743 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:40:17 crc kubenswrapper[4902]: I0121 14:40:17.770345 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:40:47 crc kubenswrapper[4902]: I0121 14:40:47.770448 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:40:47 crc kubenswrapper[4902]: I0121 14:40:47.771155 4902 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:41:17 crc kubenswrapper[4902]: I0121 14:41:17.769571 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:41:17 crc kubenswrapper[4902]: I0121 14:41:17.770285 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:41:17 crc kubenswrapper[4902]: I0121 14:41:17.770343 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:41:17 crc kubenswrapper[4902]: I0121 14:41:17.771115 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55b2a83cf4462f21e140aaf547deeb73f9aa69b5d7dddabe47e579030fe921f9"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:41:17 crc kubenswrapper[4902]: I0121 14:41:17.771182 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://55b2a83cf4462f21e140aaf547deeb73f9aa69b5d7dddabe47e579030fe921f9" gracePeriod=600 Jan 21 14:41:18 crc kubenswrapper[4902]: I0121 14:41:18.787471 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="55b2a83cf4462f21e140aaf547deeb73f9aa69b5d7dddabe47e579030fe921f9" exitCode=0 Jan 21 14:41:18 crc kubenswrapper[4902]: I0121 14:41:18.787721 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"55b2a83cf4462f21e140aaf547deeb73f9aa69b5d7dddabe47e579030fe921f9"} Jan 21 14:41:18 crc kubenswrapper[4902]: I0121 14:41:18.787869 4902 scope.go:117] "RemoveContainer" containerID="67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388" Jan 21 14:41:19 crc kubenswrapper[4902]: I0121 14:41:19.808226 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"1d2da13e9ad46e483ffa722259ad0a8b94b5c2e16fcacdd89045b1d8ac2afd0e"} Jan 21 14:43:47 crc kubenswrapper[4902]: I0121 14:43:47.769724 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:43:47 crc 
kubenswrapper[4902]: I0121 14:43:47.770297 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:44:17 crc kubenswrapper[4902]: I0121 14:44:17.770939 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:44:17 crc kubenswrapper[4902]: I0121 14:44:17.773183 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.825646 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8l7jc"] Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.827150 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovn-controller" containerID="cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b" gracePeriod=30 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.827231 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="nbdb" containerID="cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9" gracePeriod=30 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.827350 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="northd" containerID="cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240" gracePeriod=30 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.827447 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8" gracePeriod=30 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.827529 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kube-rbac-proxy-node" containerID="cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321" gracePeriod=30 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.827607 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovn-acl-logging" containerID="cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d" gracePeriod=30 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 
14:44:34.827809 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="sbdb" containerID="cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6" gracePeriod=30 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.904185 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" containerID="cri-o://1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb" gracePeriod=30 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.973088 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/2.log" Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.975635 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/1.log" Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.975690 4902 generic.go:334] "Generic (PLEG): container finished" podID="037b55cf-cb9e-41ce-8b1e-3898f490a4aa" containerID="5db75faf330517f6e52171754b04634f6e477d49b65357ee3295df0a7560fb4d" exitCode=2 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.975726 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mztd6" event={"ID":"037b55cf-cb9e-41ce-8b1e-3898f490a4aa","Type":"ContainerDied","Data":"5db75faf330517f6e52171754b04634f6e477d49b65357ee3295df0a7560fb4d"} Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.975766 4902 scope.go:117] "RemoveContainer" containerID="1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6" Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.976394 4902 scope.go:117] "RemoveContainer" containerID="5db75faf330517f6e52171754b04634f6e477d49b65357ee3295df0a7560fb4d" Jan 21 14:44:34 crc kubenswrapper[4902]: E0121 14:44:34.976657 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mztd6_openshift-multus(037b55cf-cb9e-41ce-8b1e-3898f490a4aa)\"" pod="openshift-multus/multus-mztd6" podUID="037b55cf-cb9e-41ce-8b1e-3898f490a4aa" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.562535 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/3.log" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.565633 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovn-acl-logging/0.log" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.566470 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovn-controller/0.log" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.567133 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.624623 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dhrhm"] Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.624972 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="northd" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625074 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="northd" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625154 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovn-acl-logging" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625210 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovn-acl-logging" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625263 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovn-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625312 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovn-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625364 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625412 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625462 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625532 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625589 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625639 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625694 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kube-rbac-proxy-node" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625738 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kube-rbac-proxy-node" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625790 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="sbdb" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625838 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="sbdb" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625891 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="nbdb" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625938 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="nbdb" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625983 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626033 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.626126 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kubecfg-setup" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626248 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kubecfg-setup" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.626303 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626353 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626473 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626534 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="sbdb" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626587 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626668 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626735 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626788 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626840 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovn-acl-logging" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626887 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="northd" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626937 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="nbdb" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626990 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovn-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.627063 4902 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kube-rbac-proxy-node" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.627217 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.627293 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.627425 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.631958 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710434 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-bin\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710505 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-log-socket\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710529 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-ovn\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710549 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-netd\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710565 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-var-lib-openvswitch\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710568 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-log-socket" (OuterVolumeSpecName: "log-socket") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710568 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710610 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710613 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710638 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710610 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710707 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710731 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-config\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710938 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-kubelet\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710978 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-openvswitch\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711002 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-systemd\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711026 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711037 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovn-node-metrics-cert\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711063 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711079 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-systemd-units\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711098 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711100 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-script-lib\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711148 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-etc-openvswitch\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711167 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711175 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-node-log\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711196 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-node-log" (OuterVolumeSpecName: "node-log") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711212 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcxq8\" (UniqueName: \"kubernetes.io/projected/0ec3a89a-830c-4274-8c1e-bd3c98120708-kube-api-access-fcxq8\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711224 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711235 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-slash\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711288 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-slash" (OuterVolumeSpecName: "host-slash") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711417 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-ovn-kubernetes\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711735 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-env-overrides\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711440 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711755 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-netns\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711467 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711799 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711887 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnfs6\" (UniqueName: \"kubernetes.io/projected/4efb5f30-d596-48cb-8fd7-85968f522bb6-kube-api-access-lnfs6\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711917 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-log-socket\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711965 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711999 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-cni-netd\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712018 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712067 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovnkube-script-lib\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712129 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-cni-bin\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712211 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-slash\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712245 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-etc-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712299 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-env-overrides\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712316 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712339 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-kubelet\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712372 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-systemd-units\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712392 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovn-node-metrics-cert\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712408 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-node-log\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712443 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovnkube-config\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712459 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-run-ovn-kubernetes\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712479 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-run-netns\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712495 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-systemd\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712543 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-var-lib-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712558 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-ovn\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712652 4902 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712664 4902 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-env-overrides\") on 
node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712767 4902 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712793 4902 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712808 4902 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712820 4902 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712832 4902 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712843 4902 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712857 4902 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712870 4902 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712882 4902 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712893 4902 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712904 4902 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712915 4902 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712925 4902 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-etc-openvswitch\") on node \"crc\" DevicePath \"\"" 
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712935 4902 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-node-log\") on node \"crc\" DevicePath \"\""
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712947 4902 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-slash\") on node \"crc\" DevicePath \"\""
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.715846 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec3a89a-830c-4274-8c1e-bd3c98120708-kube-api-access-fcxq8" (OuterVolumeSpecName: "kube-api-access-fcxq8") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "kube-api-access-fcxq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.716144 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.723379 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.813832 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.813915 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-cni-netd\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.813922 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.813947 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovnkube-script-lib\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.813996 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-cni-bin\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814030 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-slash\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814077 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-etc-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814099 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-env-overrides\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814104 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-cni-netd\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814135 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-slash\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814117 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-cni-bin\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814147 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814125 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814285 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-kubelet\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814339 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-systemd-units\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814383 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovn-node-metrics-cert\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814414 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-node-log\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814449 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovnkube-config\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814484 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-run-ovn-kubernetes\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814520 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-run-netns\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814567 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-systemd\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814627 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-var-lib-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814673 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-ovn\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814739 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnfs6\" (UniqueName: \"kubernetes.io/projected/4efb5f30-d596-48cb-8fd7-85968f522bb6-kube-api-access-lnfs6\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814762 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-env-overrides\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814784 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-log-socket\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814799 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-run-netns\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814872 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-kubelet\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814187 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-etc-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814898 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-var-lib-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814911 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-node-log\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814902 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-log-socket\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814939 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-systemd\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814985 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-ovn\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.815258 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-run-ovn-kubernetes\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.815316 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovnkube-config\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.815320 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcxq8\" (UniqueName: \"kubernetes.io/projected/0ec3a89a-830c-4274-8c1e-bd3c98120708-kube-api-access-fcxq8\") on node \"crc\" DevicePath \"\""
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.815348 4902
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-systemd-units\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.815355 4902 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.815379 4902 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.816320 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovnkube-script-lib\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.825211 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovn-node-metrics-cert\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.835190 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnfs6\" (UniqueName: \"kubernetes.io/projected/4efb5f30-d596-48cb-8fd7-85968f522bb6-kube-api-access-lnfs6\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.952115 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.986222 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/2.log" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.987750 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"ad059d3c04662441495a96a8cd0280ba57077575c4151ab3d7af23456bd3809f"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.990606 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/3.log" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.994237 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovn-acl-logging/0.log" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.995253 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovn-controller/0.log" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996001 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb" exitCode=0 Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996125 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6" exitCode=0 Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996220 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9" exitCode=0 Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996406 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240" exitCode=0 Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996497 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8" exitCode=0 Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996120 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996610 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996633 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} Jan 21 14:44:35 crc 
kubenswrapper[4902]: I0121 14:44:35.996646 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996657 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996658 4902 scope.go:117] "RemoveContainer" containerID="1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996667 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996801 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996256 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996813 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996822 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996830 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996837 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996844 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996851 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996857 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996881 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} Jan 21 
14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996570 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321" exitCode=0 Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.997304 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d" exitCode=143 Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.997380 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b" exitCode=143 Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.997375 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998600 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998631 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998643 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998653 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998692 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998703 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998712 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998721 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998735 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998744 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998791 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998811 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998824 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998834 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998872 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998881 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998891 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998903 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998912 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998980 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998996 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999012 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"5ce6899ab2b12b8f4895228356fb88bbef937550a4743b5874ab9aba66a78a98"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999030 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999081 4902 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999094 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999106 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999115 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999125 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999134 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999168 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999179 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999188 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.032679 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.053830 4902 scope.go:117] "RemoveContainer" containerID="8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.056841 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8l7jc"] Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.059487 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8l7jc"] Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.070413 4902 scope.go:117] "RemoveContainer" containerID="cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.081645 4902 scope.go:117] "RemoveContainer" containerID="99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.096136 4902 scope.go:117] "RemoveContainer" containerID="82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.097051 4902 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ec3a89a_830c_4274_8c1e_bd3c98120708.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ec3a89a_830c_4274_8c1e_bd3c98120708.slice/crio-5ce6899ab2b12b8f4895228356fb88bbef937550a4743b5874ab9aba66a78a98\": RecentStats: unable to find data in memory cache]" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.111756 4902 scope.go:117] "RemoveContainer" containerID="1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.166637 4902 scope.go:117] "RemoveContainer" containerID="f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.178773 4902 scope.go:117] "RemoveContainer" containerID="f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.204919 4902 scope.go:117] "RemoveContainer" containerID="5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.230416 4902 scope.go:117] "RemoveContainer" containerID="1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.231088 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": container with ID starting with 1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb not found: ID does not exist" containerID="1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.231166 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} err="failed to get container status \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": rpc error: code = NotFound desc = could not find container \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": container with ID starting with 1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.231199 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.231687 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": container with ID starting with 16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e not found: ID does not exist" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.231726 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} err="failed to get container status \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": rpc error: code = NotFound desc = could not find container \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": container with ID starting with 
16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.231755 4902 scope.go:117] "RemoveContainer" containerID="8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.232168 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": container with ID starting with 8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6 not found: ID does not exist" containerID="8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.232230 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} err="failed to get container status \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": rpc error: code = NotFound desc = could not find container \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": container with ID starting with 8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.232274 4902 scope.go:117] "RemoveContainer" containerID="cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.232754 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": container with ID starting with cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9 not found: ID does not exist" containerID="cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.232785 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} err="failed to get container status \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": rpc error: code = NotFound desc = could not find container \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": container with ID starting with cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.232805 4902 scope.go:117] "RemoveContainer" containerID="99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.233222 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": container with ID starting with 99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240 not found: ID does not exist" containerID="99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.233253 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} err="failed to get container status \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": rpc 
error: code = NotFound desc = could not find container \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": container with ID starting with 99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.233274 4902 scope.go:117] "RemoveContainer" containerID="82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.233615 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": container with ID starting with 82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8 not found: ID does not exist" containerID="82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.233644 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} err="failed to get container status \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": rpc error: code = NotFound desc = could not find container \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": container with ID starting with 82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.233665 4902 scope.go:117] "RemoveContainer" containerID="1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.234095 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": container with ID starting with 1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321 not found: ID does not exist" containerID="1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.234120 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} err="failed to get container status \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": rpc error: code = NotFound desc = could not find container \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": container with ID starting with 1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.234139 4902 scope.go:117] "RemoveContainer" containerID="f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.234497 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": container with ID starting with f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d not found: ID does not exist" containerID="f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.234526 4902 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} err="failed to get container status \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": rpc error: code = NotFound desc = could not find container \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": container with ID starting with f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.234542 4902 scope.go:117] "RemoveContainer" containerID="f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.234887 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": container with ID starting with f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b not found: ID does not exist" containerID="f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.234913 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} err="failed to get container status \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": rpc error: code = NotFound desc = could not find container \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": container with ID starting with f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.234928 4902 scope.go:117] "RemoveContainer" containerID="5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.235342 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": container with ID starting with 5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9 not found: ID does not exist" containerID="5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.235391 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} err="failed to get container status \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": rpc error: code = NotFound desc = could not find container \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": container with ID starting with 5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.235426 4902 scope.go:117] "RemoveContainer" containerID="1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.235847 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} err="failed to get container status \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": rpc error: code = NotFound desc = could not find container 
\"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": container with ID starting with 1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.235892 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.236258 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} err="failed to get container status \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": rpc error: code = NotFound desc = could not find container \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": container with ID starting with 16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.236302 4902 scope.go:117] "RemoveContainer" containerID="8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.236709 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} err="failed to get container status \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": rpc error: code = NotFound desc = could not find container \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": container with ID starting with 8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.236742 4902 scope.go:117] "RemoveContainer" containerID="cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.237181 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} err="failed to get container status \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": rpc error: code = NotFound desc = could not find container \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": container with ID starting with cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.237209 4902 scope.go:117] "RemoveContainer" containerID="99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.237516 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} err="failed to get container status \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": rpc error: code = NotFound desc = could not find container \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": container with ID starting with 99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.237538 4902 scope.go:117] "RemoveContainer" containerID="82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.238165 4902 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} err="failed to get container status \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": rpc error: code = NotFound desc = could not find container \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": container with ID starting with 82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.238207 4902 scope.go:117] "RemoveContainer" containerID="1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.238549 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} err="failed to get container status \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": rpc error: code = NotFound desc = could not find container \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": container with ID starting with 1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.238578 4902 scope.go:117] "RemoveContainer" containerID="f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.238892 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} err="failed to get container status \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": rpc error: code = NotFound desc = could not find container \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": container with ID starting with f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.238931 4902 scope.go:117] "RemoveContainer" containerID="f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.239274 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} err="failed to get container status \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": rpc error: code = NotFound desc = could not find container \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": container with ID starting with f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.239301 4902 scope.go:117] "RemoveContainer" containerID="5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.239627 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} err="failed to get container status \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": rpc error: code = NotFound desc = could not find container \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": container with ID starting with 
5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.239659 4902 scope.go:117] "RemoveContainer" containerID="1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.239946 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} err="failed to get container status \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": rpc error: code = NotFound desc = could not find container \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": container with ID starting with 1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.239980 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.240305 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} err="failed to get container status \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": rpc error: code = NotFound desc = could not find container \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": container with ID starting with 16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.240332 4902 scope.go:117] "RemoveContainer" containerID="8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.240607 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} err="failed to get container status \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": rpc error: code = NotFound desc = could not find container \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": container with ID starting with 8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.240633 4902 scope.go:117] "RemoveContainer" containerID="cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.240928 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} err="failed to get container status \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": rpc error: code = NotFound desc = could not find container \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": container with ID starting with cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.240961 4902 scope.go:117] "RemoveContainer" containerID="99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.241465 4902 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} err="failed to get container status \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": rpc error: code = NotFound desc = could not find container \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": container with ID starting with 99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.241494 4902 scope.go:117] "RemoveContainer" containerID="82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.241792 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} err="failed to get container status \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": rpc error: code = NotFound desc = could not find container \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": container with ID starting with 82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.241815 4902 scope.go:117] "RemoveContainer" containerID="1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.242197 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} err="failed to get container status \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": rpc error: code = NotFound desc = could not find container \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": container with ID starting with 1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.242235 4902 scope.go:117] "RemoveContainer" containerID="f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.242710 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} err="failed to get container status \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": rpc error: code = NotFound desc = could not find container \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": container with ID starting with f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.242745 4902 scope.go:117] "RemoveContainer" containerID="f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.243133 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} err="failed to get container status \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": rpc error: code = NotFound desc = could not find container \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": container with ID starting with f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b not found: ID does not exist" Jan 
21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.243157 4902 scope.go:117] "RemoveContainer" containerID="5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.243549 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} err="failed to get container status \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": rpc error: code = NotFound desc = could not find container \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": container with ID starting with 5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9 not found: ID does not exist"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.243580 4902 scope.go:117] "RemoveContainer" containerID="1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.243901 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} err="failed to get container status \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": rpc error: code = NotFound desc = could not find container \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": container with ID starting with 1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb not found: ID does not exist"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.243936 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.244281 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} err="failed to get container status \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": rpc error: code = NotFound desc = could not find container \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": container with ID starting with 16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e not found: ID does not exist"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.244306 4902 scope.go:117] "RemoveContainer" containerID="8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.244565 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} err="failed to get container status \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": rpc error: code = NotFound desc = could not find container \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": container with ID starting with 8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6 not found: ID does not exist"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.244592 4902 scope.go:117] "RemoveContainer" containerID="cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.244877 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} err="failed to get container status \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": rpc error: code = NotFound desc = could not find container \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": container with ID starting with cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9 not found: ID does not exist"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.244901 4902 scope.go:117] "RemoveContainer" containerID="99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.245175 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} err="failed to get container status \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": rpc error: code = NotFound desc = could not find container \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": container with ID starting with 99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240 not found: ID does not exist"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.245199 4902 scope.go:117] "RemoveContainer" containerID="82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.245444 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} err="failed to get container status \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": rpc error: code = NotFound desc = could not find container \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": container with ID starting with 82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8 not found: ID does not exist"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.245480 4902 scope.go:117] "RemoveContainer" containerID="1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.245896 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} err="failed to get container status \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": rpc error: code = NotFound desc = could not find container \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": container with ID starting with 1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321 not found: ID does not exist"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.245920 4902 scope.go:117] "RemoveContainer" containerID="f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.246207 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} err="failed to get container status \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": rpc error: code = NotFound desc = could not find container \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": container with ID starting with f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d not found: ID does not exist"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.246241 4902 scope.go:117] "RemoveContainer" containerID="f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.246658 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} err="failed to get container status \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": rpc error: code = NotFound desc = could not find container \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": container with ID starting with f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b not found: ID does not exist"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.246699 4902 scope.go:117] "RemoveContainer" containerID="5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.246962 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} err="failed to get container status \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": rpc error: code = NotFound desc = could not find container \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": container with ID starting with 5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9 not found: ID does not exist"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.246989 4902 scope.go:117] "RemoveContainer" containerID="1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.247292 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} err="failed to get container status \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": rpc error: code = NotFound desc = could not find container \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": container with ID starting with 1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb not found: ID does not exist"
Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.303939 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" path="/var/lib/kubelet/pods/0ec3a89a-830c-4274-8c1e-bd3c98120708/volumes"
Jan 21 14:44:37 crc kubenswrapper[4902]: I0121 14:44:37.006018 4902 generic.go:334] "Generic (PLEG): container finished" podID="4efb5f30-d596-48cb-8fd7-85968f522bb6" containerID="4e55abc3c8cffd91a1f936cf36cb221f25e916ae78ccfb0be036073e2cc4d481" exitCode=0
Jan 21 14:44:37 crc kubenswrapper[4902]: I0121 14:44:37.006082 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerDied","Data":"4e55abc3c8cffd91a1f936cf36cb221f25e916ae78ccfb0be036073e2cc4d481"}
Jan 21 14:44:38 crc kubenswrapper[4902]: I0121 14:44:38.017595 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"52a01671f2cdbe46ee209fecd24ce54d63691986995ab91fe19ecce4c14d9684"}
Jan 21 14:44:38 crc kubenswrapper[4902]: I0121 14:44:38.019096 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"586db90d7b9ba7aeedf26e2fce60c044446f9fa89706859a2ee71a4a21fec242"}
Jan 21 14:44:38 crc kubenswrapper[4902]: I0121 14:44:38.019201 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"a008fea5f1bbd714f64bc241037ee503c1f687d265bd6d2a0f26ae1dde8fc1c7"}
Jan 21 14:44:38 crc kubenswrapper[4902]: I0121 14:44:38.019281 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"218c31c02fabec107cd708634f79587f806de91b1b369776e06b394c5890bc31"}
Jan 21 14:44:38 crc kubenswrapper[4902]: I0121 14:44:38.019358 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"5c77c1ff71629f4c3f1ed5f377987bf3648a72717f5a18e9f99ade3a00bf1d08"}
Jan 21 14:44:38 crc kubenswrapper[4902]: I0121 14:44:38.019436 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"ef658df00fa72c2c2d41ecb4029b789c15b2ea8c2c6c7dcbe002bd573f027b17"}
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.040411 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"3efae7cce7888dd17e09a54ccf0d60d54ab81978ba3ba8a6b07376413f1e8114"}
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.585918 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-2v7g4"]
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.586729 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.588943 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.589444 4902 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-qwwr2"
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.589785 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.590457 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/33301553-deaa-4183-9538-1a43f822be80-node-mnt\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.590744 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qb5r\" (UniqueName: \"kubernetes.io/projected/33301553-deaa-4183-9538-1a43f822be80-kube-api-access-4qb5r\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.590968 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/33301553-deaa-4183-9538-1a43f822be80-crc-storage\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.590510 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.692492 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/33301553-deaa-4183-9538-1a43f822be80-node-mnt\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.692687 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qb5r\" (UniqueName: \"kubernetes.io/projected/33301553-deaa-4183-9538-1a43f822be80-kube-api-access-4qb5r\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.692782 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/33301553-deaa-4183-9538-1a43f822be80-crc-storage\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.693196 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/33301553-deaa-4183-9538-1a43f822be80-node-mnt\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.694496 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/33301553-deaa-4183-9538-1a43f822be80-crc-storage\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.725540 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qb5r\" (UniqueName: \"kubernetes.io/projected/33301553-deaa-4183-9538-1a43f822be80-kube-api-access-4qb5r\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.908165 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:41 crc kubenswrapper[4902]: E0121 14:44:41.944713 4902 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(1ebf17af0782bc29382b336ac529768568dfcf589051ddd131b8de039bf40135): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 14:44:41 crc kubenswrapper[4902]: E0121 14:44:41.945102 4902 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(1ebf17af0782bc29382b336ac529768568dfcf589051ddd131b8de039bf40135): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:41 crc kubenswrapper[4902]: E0121 14:44:41.945337 4902 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(1ebf17af0782bc29382b336ac529768568dfcf589051ddd131b8de039bf40135): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:41 crc kubenswrapper[4902]: E0121 14:44:41.945712 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2v7g4_crc-storage(33301553-deaa-4183-9538-1a43f822be80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2v7g4_crc-storage(33301553-deaa-4183-9538-1a43f822be80)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(1ebf17af0782bc29382b336ac529768568dfcf589051ddd131b8de039bf40135): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2v7g4" podUID="33301553-deaa-4183-9538-1a43f822be80"
Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.057980 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"ce4721052f072f29cf2301c45a9bd8fe0a0061b719a0d060c59820ebdb9e2aa9"}
Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.058479 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.058495 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.058540 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.081102 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2v7g4"]
Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.081191 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.081546 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.091256 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.094560 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" podStartSLOduration=8.094539419 podStartE2EDuration="8.094539419s" podCreationTimestamp="2026-01-21 14:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:44:43.093403966 +0000 UTC m=+645.170237015" watchObservedRunningTime="2026-01-21 14:44:43.094539419 +0000 UTC m=+645.171372448"
Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.095271 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm"
Jan 21 14:44:43 crc kubenswrapper[4902]: E0121 14:44:43.110996 4902 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(eecf7cd942b09f7468f82c9ae28745d547ac1dd0f9c0d8a044fdbefa0d073b41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 14:44:43 crc kubenswrapper[4902]: E0121 14:44:43.111063 4902 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(eecf7cd942b09f7468f82c9ae28745d547ac1dd0f9c0d8a044fdbefa0d073b41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:43 crc kubenswrapper[4902]: E0121 14:44:43.111090 4902 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(eecf7cd942b09f7468f82c9ae28745d547ac1dd0f9c0d8a044fdbefa0d073b41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:43 crc kubenswrapper[4902]: E0121 14:44:43.111132 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2v7g4_crc-storage(33301553-deaa-4183-9538-1a43f822be80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2v7g4_crc-storage(33301553-deaa-4183-9538-1a43f822be80)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(eecf7cd942b09f7468f82c9ae28745d547ac1dd0f9c0d8a044fdbefa0d073b41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2v7g4" podUID="33301553-deaa-4183-9538-1a43f822be80"
Jan 21 14:44:47 crc kubenswrapper[4902]: I0121 14:44:47.770733 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 14:44:47 crc kubenswrapper[4902]: I0121 14:44:47.771617 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 14:44:47 crc kubenswrapper[4902]: I0121 14:44:47.772464 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb"
Jan 21 14:44:47 crc kubenswrapper[4902]: I0121 14:44:47.773333 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d2da13e9ad46e483ffa722259ad0a8b94b5c2e16fcacdd89045b1d8ac2afd0e"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 14:44:47 crc kubenswrapper[4902]: I0121 14:44:47.773433 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://1d2da13e9ad46e483ffa722259ad0a8b94b5c2e16fcacdd89045b1d8ac2afd0e" gracePeriod=600
Jan 21 14:44:48 crc kubenswrapper[4902]: I0121 14:44:48.091276 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="1d2da13e9ad46e483ffa722259ad0a8b94b5c2e16fcacdd89045b1d8ac2afd0e" exitCode=0
Jan 21 14:44:48 crc kubenswrapper[4902]: I0121 14:44:48.091328 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"1d2da13e9ad46e483ffa722259ad0a8b94b5c2e16fcacdd89045b1d8ac2afd0e"}
Jan 21 14:44:48 crc kubenswrapper[4902]: I0121 14:44:48.091363 4902 scope.go:117] "RemoveContainer" containerID="55b2a83cf4462f21e140aaf547deeb73f9aa69b5d7dddabe47e579030fe921f9"
Jan 21 14:44:48 crc kubenswrapper[4902]: I0121 14:44:48.299692 4902 scope.go:117] "RemoveContainer" containerID="5db75faf330517f6e52171754b04634f6e477d49b65357ee3295df0a7560fb4d"
Jan 21 14:44:48 crc kubenswrapper[4902]: E0121 14:44:48.300082 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mztd6_openshift-multus(037b55cf-cb9e-41ce-8b1e-3898f490a4aa)\"" pod="openshift-multus/multus-mztd6" podUID="037b55cf-cb9e-41ce-8b1e-3898f490a4aa"
Jan 21 14:44:49 crc kubenswrapper[4902]: I0121 14:44:49.101494 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"097b55fd9fa87b27fef8f06ba3cbfef04c2339f11dc61a41eeced54a3451dbca"}
Jan 21 14:44:54 crc kubenswrapper[4902]: I0121 14:44:54.294196 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:54 crc kubenswrapper[4902]: I0121 14:44:54.294886 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:54 crc kubenswrapper[4902]: E0121 14:44:54.325646 4902 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(103d44a9baa15814086ecf3f6c4e014fcd678e9792c7f66f94dfd9a695f7ac69): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 14:44:54 crc kubenswrapper[4902]: E0121 14:44:54.326182 4902 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(103d44a9baa15814086ecf3f6c4e014fcd678e9792c7f66f94dfd9a695f7ac69): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:44:54 crc kubenswrapper[4902]: E0121 14:44:54.326231 4902 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(103d44a9baa15814086ecf3f6c4e014fcd678e9792c7f66f94dfd9a695f7ac69): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2v7g4"
pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:54 crc kubenswrapper[4902]: E0121 14:44:54.326421 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2v7g4_crc-storage(33301553-deaa-4183-9538-1a43f822be80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2v7g4_crc-storage(33301553-deaa-4183-9538-1a43f822be80)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(103d44a9baa15814086ecf3f6c4e014fcd678e9792c7f66f94dfd9a695f7ac69): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2v7g4" podUID="33301553-deaa-4183-9538-1a43f822be80" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.155851 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2"] Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.157508 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.158979 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.159449 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.172077 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2"] Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.341184 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fbc78bb-1faf-4da9-ab79-cee1540bb647-secret-volume\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.341528 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6jxd\" (UniqueName: \"kubernetes.io/projected/0fbc78bb-1faf-4da9-ab79-cee1540bb647-kube-api-access-t6jxd\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.341591 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fbc78bb-1faf-4da9-ab79-cee1540bb647-config-volume\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.443309 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6jxd\" (UniqueName: \"kubernetes.io/projected/0fbc78bb-1faf-4da9-ab79-cee1540bb647-kube-api-access-t6jxd\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.443732 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fbc78bb-1faf-4da9-ab79-cee1540bb647-config-volume\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.443944 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fbc78bb-1faf-4da9-ab79-cee1540bb647-secret-volume\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.444762 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fbc78bb-1faf-4da9-ab79-cee1540bb647-config-volume\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.450241 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fbc78bb-1faf-4da9-ab79-cee1540bb647-secret-volume\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.460847 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6jxd\" (UniqueName: \"kubernetes.io/projected/0fbc78bb-1faf-4da9-ab79-cee1540bb647-kube-api-access-t6jxd\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.480231 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: E0121 14:45:00.503537 4902 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager_0fbc78bb-1faf-4da9-ab79-cee1540bb647_0(7122897a24392e7a89d724864ece6d32a75bfb47dad155ca4c7294299109b38a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:45:00 crc kubenswrapper[4902]: E0121 14:45:00.503614 4902 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager_0fbc78bb-1faf-4da9-ab79-cee1540bb647_0(7122897a24392e7a89d724864ece6d32a75bfb47dad155ca4c7294299109b38a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: E0121 14:45:00.503642 4902 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager_0fbc78bb-1faf-4da9-ab79-cee1540bb647_0(7122897a24392e7a89d724864ece6d32a75bfb47dad155ca4c7294299109b38a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: E0121 14:45:00.503700 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager(0fbc78bb-1faf-4da9-ab79-cee1540bb647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager(0fbc78bb-1faf-4da9-ab79-cee1540bb647)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager_0fbc78bb-1faf-4da9-ab79-cee1540bb647_0(7122897a24392e7a89d724864ece6d32a75bfb47dad155ca4c7294299109b38a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" podUID="0fbc78bb-1faf-4da9-ab79-cee1540bb647" Jan 21 14:45:01 crc kubenswrapper[4902]: I0121 14:45:01.180672 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:01 crc kubenswrapper[4902]: I0121 14:45:01.181926 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:01 crc kubenswrapper[4902]: E0121 14:45:01.218942 4902 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager_0fbc78bb-1faf-4da9-ab79-cee1540bb647_0(26f898ada82329d6d422c780fae53f157c7f1038550e131a957546c157bdf041): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:45:01 crc kubenswrapper[4902]: E0121 14:45:01.219300 4902 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager_0fbc78bb-1faf-4da9-ab79-cee1540bb647_0(26f898ada82329d6d422c780fae53f157c7f1038550e131a957546c157bdf041): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:01 crc kubenswrapper[4902]: E0121 14:45:01.219331 4902 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager_0fbc78bb-1faf-4da9-ab79-cee1540bb647_0(26f898ada82329d6d422c780fae53f157c7f1038550e131a957546c157bdf041): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:01 crc kubenswrapper[4902]: E0121 14:45:01.219391 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager(0fbc78bb-1faf-4da9-ab79-cee1540bb647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager(0fbc78bb-1faf-4da9-ab79-cee1540bb647)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager_0fbc78bb-1faf-4da9-ab79-cee1540bb647_0(26f898ada82329d6d422c780fae53f157c7f1038550e131a957546c157bdf041): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" podUID="0fbc78bb-1faf-4da9-ab79-cee1540bb647" Jan 21 14:45:02 crc kubenswrapper[4902]: I0121 14:45:02.295367 4902 scope.go:117] "RemoveContainer" containerID="5db75faf330517f6e52171754b04634f6e477d49b65357ee3295df0a7560fb4d" Jan 21 14:45:03 crc kubenswrapper[4902]: I0121 14:45:03.197206 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/2.log" Jan 21 14:45:03 crc kubenswrapper[4902]: I0121 14:45:03.197298 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mztd6" event={"ID":"037b55cf-cb9e-41ce-8b1e-3898f490a4aa","Type":"ContainerStarted","Data":"51154431438b475e544119556d5d3665be92d7fa8bff56e1cd8614a93dda6ab2"} Jan 21 14:45:05 crc kubenswrapper[4902]: I0121 14:45:05.978851 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:45:08 crc kubenswrapper[4902]: I0121 14:45:08.294236 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:45:08 crc kubenswrapper[4902]: I0121 14:45:08.299429 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:45:08 crc kubenswrapper[4902]: I0121 14:45:08.767999 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2v7g4"] Jan 21 14:45:08 crc kubenswrapper[4902]: I0121 14:45:08.780506 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:45:09 crc kubenswrapper[4902]: I0121 14:45:09.235696 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2v7g4" event={"ID":"33301553-deaa-4183-9538-1a43f822be80","Type":"ContainerStarted","Data":"168dd8050c7d704f577789dc61a56a850b7ce18ed7ff065b9ba17798be68a97b"} Jan 21 14:45:11 crc kubenswrapper[4902]: I0121 14:45:11.249165 4902 generic.go:334] "Generic (PLEG): container finished" podID="33301553-deaa-4183-9538-1a43f822be80" containerID="bea584749b1ccfd891d97d3ebbaf45ab41b6cc3e6efd100d0aa2c6701cc97c94" exitCode=0 Jan 21 14:45:11 crc kubenswrapper[4902]: I0121 14:45:11.249289 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2v7g4" event={"ID":"33301553-deaa-4183-9538-1a43f822be80","Type":"ContainerDied","Data":"bea584749b1ccfd891d97d3ebbaf45ab41b6cc3e6efd100d0aa2c6701cc97c94"} Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.294907 4902 util.go:30] "No sandbox for pod can be found. 
Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.295699 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2"
Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.534850 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2"]
Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.539322 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:45:12 crc kubenswrapper[4902]: W0121 14:45:12.540238 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fbc78bb_1faf_4da9_ab79_cee1540bb647.slice/crio-e1215334675642ac402cd6b84bd24c9b626f11f54c3e47b52d28c09e9c7a92ba WatchSource:0}: Error finding container e1215334675642ac402cd6b84bd24c9b626f11f54c3e47b52d28c09e9c7a92ba: Status 404 returned error can't find the container with id e1215334675642ac402cd6b84bd24c9b626f11f54c3e47b52d28c09e9c7a92ba
Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.721398 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/33301553-deaa-4183-9538-1a43f822be80-node-mnt\") pod \"33301553-deaa-4183-9538-1a43f822be80\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") "
Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.721758 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qb5r\" (UniqueName: \"kubernetes.io/projected/33301553-deaa-4183-9538-1a43f822be80-kube-api-access-4qb5r\") pod \"33301553-deaa-4183-9538-1a43f822be80\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") "
Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.721522 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33301553-deaa-4183-9538-1a43f822be80-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "33301553-deaa-4183-9538-1a43f822be80" (UID: "33301553-deaa-4183-9538-1a43f822be80"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.721935 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/33301553-deaa-4183-9538-1a43f822be80-crc-storage\") pod \"33301553-deaa-4183-9538-1a43f822be80\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") "
Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.722247 4902 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/33301553-deaa-4183-9538-1a43f822be80-node-mnt\") on node \"crc\" DevicePath \"\""
Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.727262 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33301553-deaa-4183-9538-1a43f822be80-kube-api-access-4qb5r" (OuterVolumeSpecName: "kube-api-access-4qb5r") pod "33301553-deaa-4183-9538-1a43f822be80" (UID: "33301553-deaa-4183-9538-1a43f822be80"). InnerVolumeSpecName "kube-api-access-4qb5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.735551 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33301553-deaa-4183-9538-1a43f822be80-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "33301553-deaa-4183-9538-1a43f822be80" (UID: "33301553-deaa-4183-9538-1a43f822be80"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.823218 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qb5r\" (UniqueName: \"kubernetes.io/projected/33301553-deaa-4183-9538-1a43f822be80-kube-api-access-4qb5r\") on node \"crc\" DevicePath \"\""
Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.823522 4902 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/33301553-deaa-4183-9538-1a43f822be80-crc-storage\") on node \"crc\" DevicePath \"\""
Jan 21 14:45:13 crc kubenswrapper[4902]: I0121 14:45:13.263905 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" event={"ID":"0fbc78bb-1faf-4da9-ab79-cee1540bb647","Type":"ContainerDied","Data":"fa1156cf23ef6713ff3d92ca234f6e5140ae3f940464e50453ee6dd138fecf3b"}
Jan 21 14:45:13 crc kubenswrapper[4902]: I0121 14:45:13.263706 4902 generic.go:334] "Generic (PLEG): container finished" podID="0fbc78bb-1faf-4da9-ab79-cee1540bb647" containerID="fa1156cf23ef6713ff3d92ca234f6e5140ae3f940464e50453ee6dd138fecf3b" exitCode=0
Jan 21 14:45:13 crc kubenswrapper[4902]: I0121 14:45:13.264466 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" event={"ID":"0fbc78bb-1faf-4da9-ab79-cee1540bb647","Type":"ContainerStarted","Data":"e1215334675642ac402cd6b84bd24c9b626f11f54c3e47b52d28c09e9c7a92ba"}
Jan 21 14:45:13 crc kubenswrapper[4902]: I0121 14:45:13.268474 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2v7g4" event={"ID":"33301553-deaa-4183-9538-1a43f822be80","Type":"ContainerDied","Data":"168dd8050c7d704f577789dc61a56a850b7ce18ed7ff065b9ba17798be68a97b"}
Jan 21 14:45:13 crc kubenswrapper[4902]: I0121 14:45:13.268569 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="168dd8050c7d704f577789dc61a56a850b7ce18ed7ff065b9ba17798be68a97b"
Jan 21 14:45:13 crc kubenswrapper[4902]: I0121 14:45:13.268640 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4"
Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.523172 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2"
Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.646319 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fbc78bb-1faf-4da9-ab79-cee1540bb647-secret-volume\") pod \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") "
Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.646427 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fbc78bb-1faf-4da9-ab79-cee1540bb647-config-volume\") pod \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") "
Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.646464 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6jxd\" (UniqueName: \"kubernetes.io/projected/0fbc78bb-1faf-4da9-ab79-cee1540bb647-kube-api-access-t6jxd\") pod \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") "
Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.647237 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fbc78bb-1faf-4da9-ab79-cee1540bb647-config-volume" (OuterVolumeSpecName: "config-volume") pod "0fbc78bb-1faf-4da9-ab79-cee1540bb647" (UID: "0fbc78bb-1faf-4da9-ab79-cee1540bb647"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.650781 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fbc78bb-1faf-4da9-ab79-cee1540bb647-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0fbc78bb-1faf-4da9-ab79-cee1540bb647" (UID: "0fbc78bb-1faf-4da9-ab79-cee1540bb647"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.650985 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fbc78bb-1faf-4da9-ab79-cee1540bb647-kube-api-access-t6jxd" (OuterVolumeSpecName: "kube-api-access-t6jxd") pod "0fbc78bb-1faf-4da9-ab79-cee1540bb647" (UID: "0fbc78bb-1faf-4da9-ab79-cee1540bb647"). InnerVolumeSpecName "kube-api-access-t6jxd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.747552 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fbc78bb-1faf-4da9-ab79-cee1540bb647-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.747591 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6jxd\" (UniqueName: \"kubernetes.io/projected/0fbc78bb-1faf-4da9-ab79-cee1540bb647-kube-api-access-t6jxd\") on node \"crc\" DevicePath \"\""
Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.747605 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fbc78bb-1faf-4da9-ab79-cee1540bb647-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 14:45:15 crc kubenswrapper[4902]: I0121 14:45:15.297575 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" event={"ID":"0fbc78bb-1faf-4da9-ab79-cee1540bb647","Type":"ContainerDied","Data":"e1215334675642ac402cd6b84bd24c9b626f11f54c3e47b52d28c09e9c7a92ba"}
Jan 21 14:45:15 crc kubenswrapper[4902]: I0121 14:45:15.297605 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2"
Jan 21 14:45:15 crc kubenswrapper[4902]: I0121 14:45:15.297626 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1215334675642ac402cd6b84bd24c9b626f11f54c3e47b52d28c09e9c7a92ba"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.307829 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"]
Jan 21 14:45:20 crc kubenswrapper[4902]: E0121 14:45:20.309839 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fbc78bb-1faf-4da9-ab79-cee1540bb647" containerName="collect-profiles"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.309956 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fbc78bb-1faf-4da9-ab79-cee1540bb647" containerName="collect-profiles"
Jan 21 14:45:20 crc kubenswrapper[4902]: E0121 14:45:20.310090 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33301553-deaa-4183-9538-1a43f822be80" containerName="storage"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.310189 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="33301553-deaa-4183-9538-1a43f822be80" containerName="storage"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.310429 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fbc78bb-1faf-4da9-ab79-cee1540bb647" containerName="collect-profiles"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.310543 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="33301553-deaa-4183-9538-1a43f822be80" containerName="storage"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.311783 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.316584 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.318322 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"]
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.434367 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.434450 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2424q\" (UniqueName: \"kubernetes.io/projected/91ab62d2-e4b6-44ce-afc8-292ac5685c46-kube-api-access-2424q\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.434691 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.536507 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.536614 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.536655 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2424q\" (UniqueName: \"kubernetes.io/projected/91ab62d2-e4b6-44ce-afc8-292ac5685c46-kube-api-access-2424q\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.537344 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.537435 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.570633 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2424q\" (UniqueName: \"kubernetes.io/projected/91ab62d2-e4b6-44ce-afc8-292ac5685c46-kube-api-access-2424q\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.636630 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"
Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.886164 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"]
Jan 21 14:45:21 crc kubenswrapper[4902]: I0121 14:45:21.341463 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" event={"ID":"91ab62d2-e4b6-44ce-afc8-292ac5685c46","Type":"ContainerStarted","Data":"b875986b0761d560b02bc44d8e8fbac72883a5463f133e7f56fd7b5d8ec459b9"}
Jan 21 14:45:21 crc kubenswrapper[4902]: I0121 14:45:21.341916 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" event={"ID":"91ab62d2-e4b6-44ce-afc8-292ac5685c46","Type":"ContainerStarted","Data":"53ff9133fd502ef47a35311e474dd325d1f71b4604c5261be01cc0ef2bfd0077"}
Jan 21 14:45:22 crc kubenswrapper[4902]: I0121 14:45:22.349424 4902 generic.go:334] "Generic (PLEG): container finished" podID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerID="b875986b0761d560b02bc44d8e8fbac72883a5463f133e7f56fd7b5d8ec459b9" exitCode=0
Jan 21 14:45:22 crc kubenswrapper[4902]: I0121 14:45:22.349474 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" event={"ID":"91ab62d2-e4b6-44ce-afc8-292ac5685c46","Type":"ContainerDied","Data":"b875986b0761d560b02bc44d8e8fbac72883a5463f133e7f56fd7b5d8ec459b9"}
Jan 21 14:45:24 crc kubenswrapper[4902]: I0121 14:45:24.364848 4902 generic.go:334] "Generic (PLEG): container finished" podID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerID="fc4ddac622b7528ae9797cbb3577446dc6fe3fbbbec9e87f4584f273623fe288" exitCode=0
Jan 21 14:45:24 crc kubenswrapper[4902]: I0121 14:45:24.364950 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" event={"ID":"91ab62d2-e4b6-44ce-afc8-292ac5685c46","Type":"ContainerDied","Data":"fc4ddac622b7528ae9797cbb3577446dc6fe3fbbbec9e87f4584f273623fe288"}
Jan 21 14:45:25 crc kubenswrapper[4902]: I0121 14:45:25.375542 4902 generic.go:334] "Generic (PLEG): container finished" podID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerID="f486c46ee39ff38efaa5d9e3f69a67ba9a92bec84c3df2b335d54ce2d6581843" exitCode=0
Jan 21 14:45:25 crc kubenswrapper[4902]: I0121 14:45:25.375703 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" event={"ID":"91ab62d2-e4b6-44ce-afc8-292ac5685c46","Type":"ContainerDied","Data":"f486c46ee39ff38efaa5d9e3f69a67ba9a92bec84c3df2b335d54ce2d6581843"}
Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.640547 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"
Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.737885 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-bundle\") pod \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") "
Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.738239 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2424q\" (UniqueName: \"kubernetes.io/projected/91ab62d2-e4b6-44ce-afc8-292ac5685c46-kube-api-access-2424q\") pod \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") "
Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.738343 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-util\") pod \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") "
Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.739393 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-bundle" (OuterVolumeSpecName: "bundle") pod "91ab62d2-e4b6-44ce-afc8-292ac5685c46" (UID: "91ab62d2-e4b6-44ce-afc8-292ac5685c46"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.743720 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ab62d2-e4b6-44ce-afc8-292ac5685c46-kube-api-access-2424q" (OuterVolumeSpecName: "kube-api-access-2424q") pod "91ab62d2-e4b6-44ce-afc8-292ac5685c46" (UID: "91ab62d2-e4b6-44ce-afc8-292ac5685c46"). InnerVolumeSpecName "kube-api-access-2424q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.835291 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-util" (OuterVolumeSpecName: "util") pod "91ab62d2-e4b6-44ce-afc8-292ac5685c46" (UID: "91ab62d2-e4b6-44ce-afc8-292ac5685c46"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.839357 4902 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.839380 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2424q\" (UniqueName: \"kubernetes.io/projected/91ab62d2-e4b6-44ce-afc8-292ac5685c46-kube-api-access-2424q\") on node \"crc\" DevicePath \"\""
Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.839389 4902 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-util\") on node \"crc\" DevicePath \"\""
Jan 21 14:45:27 crc kubenswrapper[4902]: I0121 14:45:27.394844 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" event={"ID":"91ab62d2-e4b6-44ce-afc8-292ac5685c46","Type":"ContainerDied","Data":"53ff9133fd502ef47a35311e474dd325d1f71b4604c5261be01cc0ef2bfd0077"}
Jan 21 14:45:27 crc kubenswrapper[4902]: I0121 14:45:27.394889 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53ff9133fd502ef47a35311e474dd325d1f71b4604c5261be01cc0ef2bfd0077"
Jan 21 14:45:27 crc kubenswrapper[4902]: I0121 14:45:27.395291 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"
Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.823989 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-q2fs2"]
Jan 21 14:45:31 crc kubenswrapper[4902]: E0121 14:45:31.824436 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerName="pull"
Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.824447 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerName="pull"
Jan 21 14:45:31 crc kubenswrapper[4902]: E0121 14:45:31.824465 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerName="util"
Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.824471 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerName="util"
Jan 21 14:45:31 crc kubenswrapper[4902]: E0121 14:45:31.824481 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerName="extract"
Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.824486 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerName="extract"
Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.824569 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerName="extract"
Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.825023 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-q2fs2"
Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.826781 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.830269 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.833812 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tfdrx"
Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.837097 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-q2fs2"]
Jan 21 14:45:32 crc kubenswrapper[4902]: I0121 14:45:32.009924 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th8fg\" (UniqueName: \"kubernetes.io/projected/bb74694a-8b82-4c31-85da-4ba2c732bbb8-kube-api-access-th8fg\") pod \"nmstate-operator-646758c888-q2fs2\" (UID: \"bb74694a-8b82-4c31-85da-4ba2c732bbb8\") " pod="openshift-nmstate/nmstate-operator-646758c888-q2fs2"
Jan 21 14:45:32 crc kubenswrapper[4902]: I0121 14:45:32.111646 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th8fg\" (UniqueName: \"kubernetes.io/projected/bb74694a-8b82-4c31-85da-4ba2c732bbb8-kube-api-access-th8fg\") pod \"nmstate-operator-646758c888-q2fs2\" (UID: \"bb74694a-8b82-4c31-85da-4ba2c732bbb8\") " pod="openshift-nmstate/nmstate-operator-646758c888-q2fs2"
Jan 21 14:45:32 crc kubenswrapper[4902]: I0121 14:45:32.128890 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th8fg\" (UniqueName: \"kubernetes.io/projected/bb74694a-8b82-4c31-85da-4ba2c732bbb8-kube-api-access-th8fg\") pod \"nmstate-operator-646758c888-q2fs2\" (UID: \"bb74694a-8b82-4c31-85da-4ba2c732bbb8\") " pod="openshift-nmstate/nmstate-operator-646758c888-q2fs2"
Jan 21 14:45:32 crc kubenswrapper[4902]: I0121 14:45:32.140454 4902 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-q2fs2" Jan 21 14:45:32 crc kubenswrapper[4902]: I0121 14:45:32.324372 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-q2fs2"] Jan 21 14:45:32 crc kubenswrapper[4902]: I0121 14:45:32.419948 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-q2fs2" event={"ID":"bb74694a-8b82-4c31-85da-4ba2c732bbb8","Type":"ContainerStarted","Data":"6e89956a53dcb676d83fe8b19783f8035fff664a5f3e7aac8df4398e7b326d9d"} Jan 21 14:45:35 crc kubenswrapper[4902]: I0121 14:45:35.435849 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-q2fs2" event={"ID":"bb74694a-8b82-4c31-85da-4ba2c732bbb8","Type":"ContainerStarted","Data":"a7b1fa21778d2b3c170a53ce867376fa99da09058a09bbdd87fdc9bb2b7c47cd"} Jan 21 14:45:35 crc kubenswrapper[4902]: I0121 14:45:35.457040 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-q2fs2" podStartSLOduration=1.858482731 podStartE2EDuration="4.456991326s" podCreationTimestamp="2026-01-21 14:45:31 +0000 UTC" firstStartedPulling="2026-01-21 14:45:32.334741543 +0000 UTC m=+694.411574572" lastFinishedPulling="2026-01-21 14:45:34.933250138 +0000 UTC m=+697.010083167" observedRunningTime="2026-01-21 14:45:35.452877947 +0000 UTC m=+697.529710976" watchObservedRunningTime="2026-01-21 14:45:35.456991326 +0000 UTC m=+697.533824355" Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.944705 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-x6qnj"] Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.946250 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.966891 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-8plgt" Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.967798 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr"] Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.968642 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.970072 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.973034 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-x6qnj"] Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.978784 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr"] Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.983969 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-p9t9n"] Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.985603 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.063009 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c"] Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.063815 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.065748 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.065783 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-nh8rq" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.065902 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.069697 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c"] Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.136629 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-ovs-socket\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.136718 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kntmc\" (UniqueName: \"kubernetes.io/projected/14dd02e5-8cb3-4382-9107-5f5b698a2701-kube-api-access-kntmc\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.136754 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7rg2\" (UniqueName: \"kubernetes.io/projected/87768889-c41f-4563-8b38-3d939fa22303-kube-api-access-f7rg2\") pod \"nmstate-webhook-8474b5b9d8-88bkr\" (UID: \"87768889-c41f-4563-8b38-3d939fa22303\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.136774 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-dbus-socket\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.136801 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/87768889-c41f-4563-8b38-3d939fa22303-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-88bkr\" (UID: \"87768889-c41f-4563-8b38-3d939fa22303\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.136823 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7pg4\" (UniqueName: \"kubernetes.io/projected/d406f136-7416-4694-b6cd-d6bdf6b60e1f-kube-api-access-j7pg4\") pod \"nmstate-metrics-54757c584b-x6qnj\" (UID: 
\"d406f136-7416-4694-b6cd-d6bdf6b60e1f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.136879 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-nmstate-lock\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.237826 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kntmc\" (UniqueName: \"kubernetes.io/projected/14dd02e5-8cb3-4382-9107-5f5b698a2701-kube-api-access-kntmc\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.237883 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7rg2\" (UniqueName: \"kubernetes.io/projected/87768889-c41f-4563-8b38-3d939fa22303-kube-api-access-f7rg2\") pod \"nmstate-webhook-8474b5b9d8-88bkr\" (UID: \"87768889-c41f-4563-8b38-3d939fa22303\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.237905 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-dbus-socket\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.237923 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2828t\" (UniqueName: \"kubernetes.io/projected/ce3bf701-2498-42d7-969d-8944df02f1c7-kube-api-access-2828t\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.237946 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/87768889-c41f-4563-8b38-3d939fa22303-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-88bkr\" (UID: \"87768889-c41f-4563-8b38-3d939fa22303\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.237961 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7pg4\" (UniqueName: \"kubernetes.io/projected/d406f136-7416-4694-b6cd-d6bdf6b60e1f-kube-api-access-j7pg4\") pod \"nmstate-metrics-54757c584b-x6qnj\" (UID: \"d406f136-7416-4694-b6cd-d6bdf6b60e1f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.237989 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ce3bf701-2498-42d7-969d-8944df02f1c7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.238019 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" 
(UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-nmstate-lock\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.238073 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-ovs-socket\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.238105 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce3bf701-2498-42d7-969d-8944df02f1c7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.238734 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-dbus-socket\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.238902 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-nmstate-lock\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.238933 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-ovs-socket\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.245700 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/87768889-c41f-4563-8b38-3d939fa22303-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-88bkr\" (UID: \"87768889-c41f-4563-8b38-3d939fa22303\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.264032 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7pg4\" (UniqueName: \"kubernetes.io/projected/d406f136-7416-4694-b6cd-d6bdf6b60e1f-kube-api-access-j7pg4\") pod \"nmstate-metrics-54757c584b-x6qnj\" (UID: \"d406f136-7416-4694-b6cd-d6bdf6b60e1f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.264601 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.271431 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7rg2\" (UniqueName: \"kubernetes.io/projected/87768889-c41f-4563-8b38-3d939fa22303-kube-api-access-f7rg2\") pod \"nmstate-webhook-8474b5b9d8-88bkr\" (UID: \"87768889-c41f-4563-8b38-3d939fa22303\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.272915 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kntmc\" (UniqueName: \"kubernetes.io/projected/14dd02e5-8cb3-4382-9107-5f5b698a2701-kube-api-access-kntmc\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.295184 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.301624 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.321707 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d465bdf6b-lmlwx"] Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.322464 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.341572 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ce3bf701-2498-42d7-969d-8944df02f1c7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.341656 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce3bf701-2498-42d7-969d-8944df02f1c7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.341683 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2828t\" (UniqueName: \"kubernetes.io/projected/ce3bf701-2498-42d7-969d-8944df02f1c7-kube-api-access-2828t\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: E0121 14:45:41.342335 4902 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 21 14:45:41 crc kubenswrapper[4902]: E0121 14:45:41.342413 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce3bf701-2498-42d7-969d-8944df02f1c7-plugin-serving-cert podName:ce3bf701-2498-42d7-969d-8944df02f1c7 nodeName:}" failed. No retries permitted until 2026-01-21 14:45:41.842392043 +0000 UTC m=+703.919225072 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/ce3bf701-2498-42d7-969d-8944df02f1c7-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-6vz5c" (UID: "ce3bf701-2498-42d7-969d-8944df02f1c7") : secret "plugin-serving-cert" not found Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.343265 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d465bdf6b-lmlwx"] Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.343351 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ce3bf701-2498-42d7-969d-8944df02f1c7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: W0121 14:45:41.357605 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14dd02e5_8cb3_4382_9107_5f5b698a2701.slice/crio-f685391b36c90acca2dff7bd4e5a34f530bb4c9af6a6b96d1a20eabb461594b3 WatchSource:0}: Error finding container f685391b36c90acca2dff7bd4e5a34f530bb4c9af6a6b96d1a20eabb461594b3: Status 404 returned error can't find the container with id f685391b36c90acca2dff7bd4e5a34f530bb4c9af6a6b96d1a20eabb461594b3 Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.372537 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2828t\" (UniqueName: \"kubernetes.io/projected/ce3bf701-2498-42d7-969d-8944df02f1c7-kube-api-access-2828t\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.442641 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-trusted-ca-bundle\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.442685 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp2gl\" (UniqueName: \"kubernetes.io/projected/08e0dea0-bfea-427f-b481-61e8d54dee3b-kube-api-access-pp2gl\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.442717 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-config\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.443727 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-oauth-serving-cert\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.443771 4902 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-service-ca\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.443802 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-oauth-config\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.443820 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-serving-cert\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.476068 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-p9t9n" event={"ID":"14dd02e5-8cb3-4382-9107-5f5b698a2701","Type":"ContainerStarted","Data":"f685391b36c90acca2dff7bd4e5a34f530bb4c9af6a6b96d1a20eabb461594b3"} Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.545476 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-service-ca\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.545526 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-oauth-config\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.545544 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-serving-cert\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.545586 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-trusted-ca-bundle\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.545614 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp2gl\" (UniqueName: \"kubernetes.io/projected/08e0dea0-bfea-427f-b481-61e8d54dee3b-kube-api-access-pp2gl\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.545637 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-config\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.545672 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-oauth-serving-cert\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.547568 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-config\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.549203 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-service-ca\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.550269 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-trusted-ca-bundle\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.551685 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-oauth-serving-cert\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.551864 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-oauth-config\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.553409 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-serving-cert\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.558004 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-x6qnj"] Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.572744 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp2gl\" (UniqueName: \"kubernetes.io/projected/08e0dea0-bfea-427f-b481-61e8d54dee3b-kube-api-access-pp2gl\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " 
pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.679713 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.822862 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr"] Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.849577 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce3bf701-2498-42d7-969d-8944df02f1c7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.853553 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce3bf701-2498-42d7-969d-8944df02f1c7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.893124 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d465bdf6b-lmlwx"] Jan 21 14:45:41 crc kubenswrapper[4902]: W0121 14:45:41.894850 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08e0dea0_bfea_427f_b481_61e8d54dee3b.slice/crio-29104ebeb778472fb768bbf7e0966114429742a82f494e7703718863a2882728 WatchSource:0}: Error finding container 29104ebeb778472fb768bbf7e0966114429742a82f494e7703718863a2882728: Status 404 returned error can't find the container with id 29104ebeb778472fb768bbf7e0966114429742a82f494e7703718863a2882728 Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.987449 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:42 crc kubenswrapper[4902]: I0121 14:45:42.194149 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c"] Jan 21 14:45:42 crc kubenswrapper[4902]: W0121 14:45:42.199080 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce3bf701_2498_42d7_969d_8944df02f1c7.slice/crio-37c376c3b774a4ece34265c7e4379b250597a8de58cdc3d164fa14514d7104d4 WatchSource:0}: Error finding container 37c376c3b774a4ece34265c7e4379b250597a8de58cdc3d164fa14514d7104d4: Status 404 returned error can't find the container with id 37c376c3b774a4ece34265c7e4379b250597a8de58cdc3d164fa14514d7104d4 Jan 21 14:45:42 crc kubenswrapper[4902]: I0121 14:45:42.485242 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" event={"ID":"87768889-c41f-4563-8b38-3d939fa22303","Type":"ContainerStarted","Data":"355c10a1670be8e994cc74d7b45bc4fc91ecaa1527aea0a4e848d6330a572126"} Jan 21 14:45:42 crc kubenswrapper[4902]: I0121 14:45:42.487012 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" event={"ID":"d406f136-7416-4694-b6cd-d6bdf6b60e1f","Type":"ContainerStarted","Data":"062dda8ed40fcbefabba7bfe9e9599c40bd3cfcfa12864c1841541ad12cb2094"} Jan 21 14:45:42 crc kubenswrapper[4902]: I0121 14:45:42.488010 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" event={"ID":"ce3bf701-2498-42d7-969d-8944df02f1c7","Type":"ContainerStarted","Data":"37c376c3b774a4ece34265c7e4379b250597a8de58cdc3d164fa14514d7104d4"} Jan 21 14:45:42 crc kubenswrapper[4902]: I0121 14:45:42.489758 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d465bdf6b-lmlwx" event={"ID":"08e0dea0-bfea-427f-b481-61e8d54dee3b","Type":"ContainerStarted","Data":"17a160a25df378fe575baaee56b145bb427fd51c26e693c0e43d6b048dc0119b"} Jan 21 14:45:42 crc kubenswrapper[4902]: I0121 14:45:42.489788 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d465bdf6b-lmlwx" event={"ID":"08e0dea0-bfea-427f-b481-61e8d54dee3b","Type":"ContainerStarted","Data":"29104ebeb778472fb768bbf7e0966114429742a82f494e7703718863a2882728"} Jan 21 14:45:42 crc kubenswrapper[4902]: I0121 14:45:42.510181 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d465bdf6b-lmlwx" podStartSLOduration=1.5101570720000002 podStartE2EDuration="1.510157072s" podCreationTimestamp="2026-01-21 14:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:45:42.50559224 +0000 UTC m=+704.582425299" watchObservedRunningTime="2026-01-21 14:45:42.510157072 +0000 UTC m=+704.586990101" Jan 21 14:45:44 crc kubenswrapper[4902]: I0121 14:45:44.505408 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" event={"ID":"87768889-c41f-4563-8b38-3d939fa22303","Type":"ContainerStarted","Data":"513c8d8798bebd9b85401a861b031560b4c21ec0614f2f08f971782e45df7d10"} Jan 21 14:45:44 crc kubenswrapper[4902]: I0121 14:45:44.507325 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:44 crc 
kubenswrapper[4902]: I0121 14:45:44.510766 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" event={"ID":"d406f136-7416-4694-b6cd-d6bdf6b60e1f","Type":"ContainerStarted","Data":"04ace925f9f9a2cc1e1d5d5f94788d8b87b4fc28aa4736b9413cdb9869730af8"} Jan 21 14:45:44 crc kubenswrapper[4902]: I0121 14:45:44.530661 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" podStartSLOduration=2.06139104 podStartE2EDuration="4.530632902s" podCreationTimestamp="2026-01-21 14:45:40 +0000 UTC" firstStartedPulling="2026-01-21 14:45:41.833632581 +0000 UTC m=+703.910465610" lastFinishedPulling="2026-01-21 14:45:44.302874443 +0000 UTC m=+706.379707472" observedRunningTime="2026-01-21 14:45:44.529642513 +0000 UTC m=+706.606475552" watchObservedRunningTime="2026-01-21 14:45:44.530632902 +0000 UTC m=+706.607465931" Jan 21 14:45:45 crc kubenswrapper[4902]: I0121 14:45:45.520410 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-p9t9n" event={"ID":"14dd02e5-8cb3-4382-9107-5f5b698a2701","Type":"ContainerStarted","Data":"f53b88ff4b2343318e18fbdd3016cf4ff4a79803161c4c527b9e2876631249a3"} Jan 21 14:45:45 crc kubenswrapper[4902]: I0121 14:45:45.520780 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:45 crc kubenswrapper[4902]: I0121 14:45:45.542332 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-p9t9n" podStartSLOduration=2.66624941 podStartE2EDuration="5.542312923s" podCreationTimestamp="2026-01-21 14:45:40 +0000 UTC" firstStartedPulling="2026-01-21 14:45:41.371226625 +0000 UTC m=+703.448059654" lastFinishedPulling="2026-01-21 14:45:44.247290118 +0000 UTC m=+706.324123167" observedRunningTime="2026-01-21 14:45:45.542148678 +0000 UTC m=+707.618981747" watchObservedRunningTime="2026-01-21 14:45:45.542312923 +0000 UTC m=+707.619145952" Jan 21 14:45:46 crc kubenswrapper[4902]: I0121 14:45:46.532172 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" event={"ID":"ce3bf701-2498-42d7-969d-8944df02f1c7","Type":"ContainerStarted","Data":"cab0a46ee65cc527ba9f68932d1600b34fb1e18af80107f6c6fca9ad6a595d2c"} Jan 21 14:45:46 crc kubenswrapper[4902]: I0121 14:45:46.557539 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" podStartSLOduration=2.167596421 podStartE2EDuration="5.557513076s" podCreationTimestamp="2026-01-21 14:45:41 +0000 UTC" firstStartedPulling="2026-01-21 14:45:42.201828326 +0000 UTC m=+704.278661345" lastFinishedPulling="2026-01-21 14:45:45.591744971 +0000 UTC m=+707.668578000" observedRunningTime="2026-01-21 14:45:46.549111374 +0000 UTC m=+708.625944423" watchObservedRunningTime="2026-01-21 14:45:46.557513076 +0000 UTC m=+708.634346115" Jan 21 14:45:47 crc kubenswrapper[4902]: I0121 14:45:47.538077 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" event={"ID":"d406f136-7416-4694-b6cd-d6bdf6b60e1f","Type":"ContainerStarted","Data":"6e3db7c12b35283423b30efb8976925ddcb8d39fb4900c2b8f2650c2603179f4"} Jan 21 14:45:47 crc kubenswrapper[4902]: I0121 14:45:47.563487 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" 
podStartSLOduration=2.138001753 podStartE2EDuration="7.563457333s" podCreationTimestamp="2026-01-21 14:45:40 +0000 UTC" firstStartedPulling="2026-01-21 14:45:41.563363165 +0000 UTC m=+703.640196194" lastFinishedPulling="2026-01-21 14:45:46.988818725 +0000 UTC m=+709.065651774" observedRunningTime="2026-01-21 14:45:47.554138874 +0000 UTC m=+709.630971983" watchObservedRunningTime="2026-01-21 14:45:47.563457333 +0000 UTC m=+709.640290392" Jan 21 14:45:51 crc kubenswrapper[4902]: I0121 14:45:51.333350 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:51 crc kubenswrapper[4902]: I0121 14:45:51.680275 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:51 crc kubenswrapper[4902]: I0121 14:45:51.680372 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:51 crc kubenswrapper[4902]: I0121 14:45:51.688555 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:52 crc kubenswrapper[4902]: I0121 14:45:52.580334 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:52 crc kubenswrapper[4902]: I0121 14:45:52.639960 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9nw4v"] Jan 21 14:46:01 crc kubenswrapper[4902]: I0121 14:46:01.301941 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.646928 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw"] Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.648622 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.651810 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.659941 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw"] Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.789594 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.789700 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6gp6\" (UniqueName: \"kubernetes.io/projected/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-kube-api-access-r6gp6\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.789894 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.891808 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6gp6\" (UniqueName: \"kubernetes.io/projected/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-kube-api-access-r6gp6\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.891859 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.891917 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.892376 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.892542 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.919954 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6gp6\" (UniqueName: \"kubernetes.io/projected/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-kube-api-access-r6gp6\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.965011 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:16 crc kubenswrapper[4902]: I0121 14:46:16.153119 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw"] Jan 21 14:46:16 crc kubenswrapper[4902]: I0121 14:46:16.708581 4902 generic.go:334] "Generic (PLEG): container finished" podID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerID="056949580ed2fcf95c59c991288fe924f7c3867e56878d30dca4327cb8866274" exitCode=0 Jan 21 14:46:16 crc kubenswrapper[4902]: I0121 14:46:16.708630 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" event={"ID":"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea","Type":"ContainerDied","Data":"056949580ed2fcf95c59c991288fe924f7c3867e56878d30dca4327cb8866274"} Jan 21 14:46:16 crc kubenswrapper[4902]: I0121 14:46:16.708841 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" event={"ID":"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea","Type":"ContainerStarted","Data":"2a81671ed1bad6d4359aa6ae00fb18ca71b35101ea8e33f3a97da3ccb4d25f90"} Jan 21 14:46:17 crc kubenswrapper[4902]: I0121 14:46:17.676212 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-9nw4v" podUID="853f0809-8828-4976-9b04-dd078ab64ced" containerName="console" containerID="cri-o://fff0e780f43c17189c7dce1045515753af56428025b126e2b903e1fb3882c9d0" gracePeriod=15 Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.721582 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9nw4v_853f0809-8828-4976-9b04-dd078ab64ced/console/0.log" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.721651 4902 generic.go:334] "Generic (PLEG): container finished" podID="853f0809-8828-4976-9b04-dd078ab64ced" containerID="fff0e780f43c17189c7dce1045515753af56428025b126e2b903e1fb3882c9d0" exitCode=2 Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.721697 4902 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9nw4v" event={"ID":"853f0809-8828-4976-9b04-dd078ab64ced","Type":"ContainerDied","Data":"fff0e780f43c17189c7dce1045515753af56428025b126e2b903e1fb3882c9d0"} Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.819082 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9nw4v_853f0809-8828-4976-9b04-dd078ab64ced/console/0.log" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.819385 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.929608 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-oauth-config\") pod \"853f0809-8828-4976-9b04-dd078ab64ced\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.929686 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-trusted-ca-bundle\") pod \"853f0809-8828-4976-9b04-dd078ab64ced\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.929711 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-console-config\") pod \"853f0809-8828-4976-9b04-dd078ab64ced\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.929757 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-oauth-serving-cert\") pod \"853f0809-8828-4976-9b04-dd078ab64ced\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.929834 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-serving-cert\") pod \"853f0809-8828-4976-9b04-dd078ab64ced\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.929897 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg77p\" (UniqueName: \"kubernetes.io/projected/853f0809-8828-4976-9b04-dd078ab64ced-kube-api-access-xg77p\") pod \"853f0809-8828-4976-9b04-dd078ab64ced\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.929960 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-service-ca\") pod \"853f0809-8828-4976-9b04-dd078ab64ced\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.930651 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-console-config" (OuterVolumeSpecName: "console-config") pod "853f0809-8828-4976-9b04-dd078ab64ced" (UID: "853f0809-8828-4976-9b04-dd078ab64ced"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.930670 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "853f0809-8828-4976-9b04-dd078ab64ced" (UID: "853f0809-8828-4976-9b04-dd078ab64ced"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.930698 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "853f0809-8828-4976-9b04-dd078ab64ced" (UID: "853f0809-8828-4976-9b04-dd078ab64ced"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.930881 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-service-ca" (OuterVolumeSpecName: "service-ca") pod "853f0809-8828-4976-9b04-dd078ab64ced" (UID: "853f0809-8828-4976-9b04-dd078ab64ced"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.935214 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "853f0809-8828-4976-9b04-dd078ab64ced" (UID: "853f0809-8828-4976-9b04-dd078ab64ced"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.935274 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853f0809-8828-4976-9b04-dd078ab64ced-kube-api-access-xg77p" (OuterVolumeSpecName: "kube-api-access-xg77p") pod "853f0809-8828-4976-9b04-dd078ab64ced" (UID: "853f0809-8828-4976-9b04-dd078ab64ced"). InnerVolumeSpecName "kube-api-access-xg77p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.936150 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "853f0809-8828-4976-9b04-dd078ab64ced" (UID: "853f0809-8828-4976-9b04-dd078ab64ced"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.031472 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg77p\" (UniqueName: \"kubernetes.io/projected/853f0809-8828-4976-9b04-dd078ab64ced-kube-api-access-xg77p\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.031520 4902 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.031535 4902 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.031547 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.031559 4902 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.031570 4902 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.031581 4902 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.731890 4902 generic.go:334] "Generic (PLEG): container finished" podID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerID="8dd4a2451a9a58dac129a4d9b7533aef9b799c33e51b43be85745f80c57a2168" exitCode=0 Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.732007 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" event={"ID":"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea","Type":"ContainerDied","Data":"8dd4a2451a9a58dac129a4d9b7533aef9b799c33e51b43be85745f80c57a2168"} Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.736935 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9nw4v_853f0809-8828-4976-9b04-dd078ab64ced/console/0.log" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.737000 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9nw4v" event={"ID":"853f0809-8828-4976-9b04-dd078ab64ced","Type":"ContainerDied","Data":"11dbd86a6b371ca7401386f5e9d390f798d2eff9c897fbde80c73fd4547eac53"} Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.737069 4902 scope.go:117] "RemoveContainer" containerID="fff0e780f43c17189c7dce1045515753af56428025b126e2b903e1fb3882c9d0" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.737154 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.844938 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9nw4v"] Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.856592 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-9nw4v"] Jan 21 14:46:20 crc kubenswrapper[4902]: I0121 14:46:20.302367 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="853f0809-8828-4976-9b04-dd078ab64ced" path="/var/lib/kubelet/pods/853f0809-8828-4976-9b04-dd078ab64ced/volumes" Jan 21 14:46:20 crc kubenswrapper[4902]: I0121 14:46:20.747619 4902 generic.go:334] "Generic (PLEG): container finished" podID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerID="3ad4015d5c8074c484485ea54b519048f0f8f14c880d8f9b8af977414a857907" exitCode=0 Jan 21 14:46:20 crc kubenswrapper[4902]: I0121 14:46:20.747768 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" event={"ID":"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea","Type":"ContainerDied","Data":"3ad4015d5c8074c484485ea54b519048f0f8f14c880d8f9b8af977414a857907"} Jan 21 14:46:21 crc kubenswrapper[4902]: I0121 14:46:21.993592 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.171498 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-bundle\") pod \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.171604 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6gp6\" (UniqueName: \"kubernetes.io/projected/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-kube-api-access-r6gp6\") pod \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.171656 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-util\") pod \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.172392 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-bundle" (OuterVolumeSpecName: "bundle") pod "5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" (UID: "5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.178312 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-kube-api-access-r6gp6" (OuterVolumeSpecName: "kube-api-access-r6gp6") pod "5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" (UID: "5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea"). InnerVolumeSpecName "kube-api-access-r6gp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.182203 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-util" (OuterVolumeSpecName: "util") pod "5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" (UID: "5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.272851 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6gp6\" (UniqueName: \"kubernetes.io/projected/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-kube-api-access-r6gp6\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.272906 4902 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-util\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.272929 4902 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.761884 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" event={"ID":"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea","Type":"ContainerDied","Data":"2a81671ed1bad6d4359aa6ae00fb18ca71b35101ea8e33f3a97da3ccb4d25f90"} Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.761918 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a81671ed1bad6d4359aa6ae00fb18ca71b35101ea8e33f3a97da3ccb4d25f90" Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.761949 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:32 crc kubenswrapper[4902]: I0121 14:46:32.113012 4902 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.092040 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68"] Jan 21 14:46:38 crc kubenswrapper[4902]: E0121 14:46:38.092826 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853f0809-8828-4976-9b04-dd078ab64ced" containerName="console" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.092841 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="853f0809-8828-4976-9b04-dd078ab64ced" containerName="console" Jan 21 14:46:38 crc kubenswrapper[4902]: E0121 14:46:38.092859 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerName="pull" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.092867 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerName="pull" Jan 21 14:46:38 crc kubenswrapper[4902]: E0121 14:46:38.092893 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerName="util" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.092901 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerName="util" Jan 21 14:46:38 crc kubenswrapper[4902]: E0121 14:46:38.092911 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerName="extract" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.092918 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerName="extract" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.093031 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerName="extract" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.093070 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="853f0809-8828-4976-9b04-dd078ab64ced" containerName="console" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.093502 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.097888 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.098216 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.098609 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.098746 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.099562 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-7wtgs" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.113379 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68"] Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.171083 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs9bt\" (UniqueName: \"kubernetes.io/projected/1ddec7fa-7afd-4d77-af77-509910e52c70-kube-api-access-fs9bt\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.171183 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ddec7fa-7afd-4d77-af77-509910e52c70-apiservice-cert\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.171211 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ddec7fa-7afd-4d77-af77-509910e52c70-webhook-cert\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.272110 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ddec7fa-7afd-4d77-af77-509910e52c70-apiservice-cert\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.272401 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ddec7fa-7afd-4d77-af77-509910e52c70-webhook-cert\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.272524 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs9bt\" (UniqueName: \"kubernetes.io/projected/1ddec7fa-7afd-4d77-af77-509910e52c70-kube-api-access-fs9bt\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.278525 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ddec7fa-7afd-4d77-af77-509910e52c70-apiservice-cert\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.282811 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ddec7fa-7afd-4d77-af77-509910e52c70-webhook-cert\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.310647 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs9bt\" (UniqueName: \"kubernetes.io/projected/1ddec7fa-7afd-4d77-af77-509910e52c70-kube-api-access-fs9bt\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.409542 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.429616 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn"] Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.430320 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.433179 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.453881 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.459173 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn"] Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.459848 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-tm4rt" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.578715 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwlzb\" (UniqueName: \"kubernetes.io/projected/050f3d44-1ff2-4334-8fa8-c5124c7199d9-kube-api-access-xwlzb\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.579095 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/050f3d44-1ff2-4334-8fa8-c5124c7199d9-apiservice-cert\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.579194 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/050f3d44-1ff2-4334-8fa8-c5124c7199d9-webhook-cert\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.680926 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/050f3d44-1ff2-4334-8fa8-c5124c7199d9-apiservice-cert\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.680964 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/050f3d44-1ff2-4334-8fa8-c5124c7199d9-webhook-cert\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.681000 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwlzb\" (UniqueName: \"kubernetes.io/projected/050f3d44-1ff2-4334-8fa8-c5124c7199d9-kube-api-access-xwlzb\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 
14:46:38.685743 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/050f3d44-1ff2-4334-8fa8-c5124c7199d9-webhook-cert\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.685813 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/050f3d44-1ff2-4334-8fa8-c5124c7199d9-apiservice-cert\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.695804 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwlzb\" (UniqueName: \"kubernetes.io/projected/050f3d44-1ff2-4334-8fa8-c5124c7199d9-kube-api-access-xwlzb\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.793677 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.882294 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68"] Jan 21 14:46:38 crc kubenswrapper[4902]: W0121 14:46:38.895445 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ddec7fa_7afd_4d77_af77_509910e52c70.slice/crio-9eb019b3ebc29d56f418f06e561e1946d0224b3fa30fb264add3c7e61150309b WatchSource:0}: Error finding container 9eb019b3ebc29d56f418f06e561e1946d0224b3fa30fb264add3c7e61150309b: Status 404 returned error can't find the container with id 9eb019b3ebc29d56f418f06e561e1946d0224b3fa30fb264add3c7e61150309b Jan 21 14:46:39 crc kubenswrapper[4902]: I0121 14:46:39.015240 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn"] Jan 21 14:46:39 crc kubenswrapper[4902]: W0121 14:46:39.024278 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod050f3d44_1ff2_4334_8fa8_c5124c7199d9.slice/crio-e6061992f15f7c646e86924c7a52c204e160c01ed4bf75ca4f08665781fe2863 WatchSource:0}: Error finding container e6061992f15f7c646e86924c7a52c204e160c01ed4bf75ca4f08665781fe2863: Status 404 returned error can't find the container with id e6061992f15f7c646e86924c7a52c204e160c01ed4bf75ca4f08665781fe2863 Jan 21 14:46:39 crc kubenswrapper[4902]: I0121 14:46:39.866602 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" event={"ID":"1ddec7fa-7afd-4d77-af77-509910e52c70","Type":"ContainerStarted","Data":"9eb019b3ebc29d56f418f06e561e1946d0224b3fa30fb264add3c7e61150309b"} Jan 21 14:46:39 crc kubenswrapper[4902]: I0121 14:46:39.867860 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" 
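[Editor's note] The two W-level "Failed to process watch event ... Status 404" entries above are likely a benign race: the cgroup watcher notices the new crio-* cgroup before the runtime has registered the container ID, and the subsequent ContainerStarted PLEG events show both pods start anyway. A minimal Go sketch of tolerating such a race by retrying briefly instead of treating the first 404 as fatal; lookup is a hypothetical stand-in, not a real kubelet or cAdvisor API:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New("status 404: container not registered yet")

// lookup is a hypothetical stand-in for asking the runtime about a container ID;
// it succeeds only once the ready channel is closed.
func lookup(id string, ready chan struct{}) error {
	select {
	case <-ready:
		return nil
	default:
		return errNotFound
	}
}

// waitForContainer retries the lookup for a short window, which is why a
// one-off 404 like the warnings above need not fail the pod.
func waitForContainer(id string, ready chan struct{}) error {
	for i := 0; i < 10; i++ {
		if err := lookup(id, ready); err == nil {
			return nil
		}
		time.Sleep(50 * time.Millisecond)
	}
	return errNotFound
}

func main() {
	ready := make(chan struct{})
	go func() { time.Sleep(120 * time.Millisecond); close(ready) }()
	fmt.Println(waitForContainer("9eb019b3...", ready)) // <nil> once registered
}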
event={"ID":"050f3d44-1ff2-4334-8fa8-c5124c7199d9","Type":"ContainerStarted","Data":"e6061992f15f7c646e86924c7a52c204e160c01ed4bf75ca4f08665781fe2863"} Jan 21 14:46:43 crc kubenswrapper[4902]: I0121 14:46:43.891715 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" event={"ID":"050f3d44-1ff2-4334-8fa8-c5124c7199d9","Type":"ContainerStarted","Data":"d0d579ffc2ae50b314775f0c499c83a641ccc1d184e9970f60e46ef2957e16a0"} Jan 21 14:46:43 crc kubenswrapper[4902]: I0121 14:46:43.892190 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:43 crc kubenswrapper[4902]: I0121 14:46:43.893350 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" event={"ID":"1ddec7fa-7afd-4d77-af77-509910e52c70","Type":"ContainerStarted","Data":"ffee974c6326a69b230da23362ffb000b08e60c3f1bc0cdbea4581eaaa918bd3"} Jan 21 14:46:43 crc kubenswrapper[4902]: I0121 14:46:43.893548 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:43 crc kubenswrapper[4902]: I0121 14:46:43.920362 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" podStartSLOduration=1.4386846580000001 podStartE2EDuration="5.920341788s" podCreationTimestamp="2026-01-21 14:46:38 +0000 UTC" firstStartedPulling="2026-01-21 14:46:39.027233067 +0000 UTC m=+761.104066096" lastFinishedPulling="2026-01-21 14:46:43.508890197 +0000 UTC m=+765.585723226" observedRunningTime="2026-01-21 14:46:43.915617403 +0000 UTC m=+765.992450472" watchObservedRunningTime="2026-01-21 14:46:43.920341788 +0000 UTC m=+765.997174817" Jan 21 14:46:43 crc kubenswrapper[4902]: I0121 14:46:43.938584 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" podStartSLOduration=1.350254058 podStartE2EDuration="5.938560982s" podCreationTimestamp="2026-01-21 14:46:38 +0000 UTC" firstStartedPulling="2026-01-21 14:46:38.899163498 +0000 UTC m=+760.975996537" lastFinishedPulling="2026-01-21 14:46:43.487470432 +0000 UTC m=+765.564303461" observedRunningTime="2026-01-21 14:46:43.934773623 +0000 UTC m=+766.011606662" watchObservedRunningTime="2026-01-21 14:46:43.938560982 +0000 UTC m=+766.015394021" Jan 21 14:46:58 crc kubenswrapper[4902]: I0121 14:46:58.798753 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:47:17 crc kubenswrapper[4902]: I0121 14:47:17.769420 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:47:17 crc kubenswrapper[4902]: I0121 14:47:17.769990 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:47:18 crc kubenswrapper[4902]: I0121 
14:47:18.412621 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.210542 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-xpzj8"] Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.213137 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.215259 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj"] Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.216093 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.216853 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.216896 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.217079 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-z9vpk" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.217822 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.227396 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj"] Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275513 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f8bf62b-aae0-4080-a5ee-2472a60fe41f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-72rgj\" (UID: \"4f8bf62b-aae0-4080-a5ee-2472a60fe41f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275571 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fc6639b-9150-4158-836f-1ffc1c4f5339-metrics-certs\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275616 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-conf\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275646 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvhsh\" (UniqueName: \"kubernetes.io/projected/6fc6639b-9150-4158-836f-1ffc1c4f5339-kube-api-access-fvhsh\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275758 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-startup\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275785 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-metrics\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275814 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg5ss\" (UniqueName: \"kubernetes.io/projected/4f8bf62b-aae0-4080-a5ee-2472a60fe41f-kube-api-access-bg5ss\") pod \"frr-k8s-webhook-server-7df86c4f6c-72rgj\" (UID: \"4f8bf62b-aae0-4080-a5ee-2472a60fe41f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275890 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-reloader\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275926 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-sockets\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.321978 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-5m6ct"] Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.322892 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.326094 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.326191 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.327471 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-4mbx4" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.327775 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.335448 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-h2pgt"] Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.340112 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.348917 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.363152 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-h2pgt"] Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376711 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-reloader\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376781 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-sockets\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376812 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kwhk\" (UniqueName: \"kubernetes.io/projected/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-kube-api-access-5kwhk\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376828 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f8bf62b-aae0-4080-a5ee-2472a60fe41f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-72rgj\" (UID: \"4f8bf62b-aae0-4080-a5ee-2472a60fe41f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376845 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metrics-certs\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376866 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fc6639b-9150-4158-836f-1ffc1c4f5339-metrics-certs\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376894 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-conf\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376912 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvhsh\" (UniqueName: \"kubernetes.io/projected/6fc6639b-9150-4158-836f-1ffc1c4f5339-kube-api-access-fvhsh\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376941 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-startup\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376956 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metallb-excludel2\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376973 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-metrics\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376987 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.377003 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg5ss\" (UniqueName: \"kubernetes.io/projected/4f8bf62b-aae0-4080-a5ee-2472a60fe41f-kube-api-access-bg5ss\") pod \"frr-k8s-webhook-server-7df86c4f6c-72rgj\" (UID: \"4f8bf62b-aae0-4080-a5ee-2472a60fe41f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.377601 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-reloader\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.377778 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-sockets\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.378557 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-conf\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.378750 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-metrics\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.379190 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-startup\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.383357 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f8bf62b-aae0-4080-a5ee-2472a60fe41f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-72rgj\" (UID: \"4f8bf62b-aae0-4080-a5ee-2472a60fe41f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.386360 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fc6639b-9150-4158-836f-1ffc1c4f5339-metrics-certs\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.395651 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg5ss\" (UniqueName: \"kubernetes.io/projected/4f8bf62b-aae0-4080-a5ee-2472a60fe41f-kube-api-access-bg5ss\") pod \"frr-k8s-webhook-server-7df86c4f6c-72rgj\" (UID: \"4f8bf62b-aae0-4080-a5ee-2472a60fe41f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.415867 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvhsh\" (UniqueName: \"kubernetes.io/projected/6fc6639b-9150-4158-836f-1ffc1c4f5339-kube-api-access-fvhsh\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.478645 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kwhk\" (UniqueName: \"kubernetes.io/projected/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-kube-api-access-5kwhk\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.479127 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metrics-certs\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.479242 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/694bf42b-c612-44c2-964b-c91336b8afa1-cert\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.479346 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/694bf42b-c612-44c2-964b-c91336b8afa1-metrics-certs\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: E0121 14:47:19.479257 4902 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 21 14:47:19 crc kubenswrapper[4902]: E0121 14:47:19.479541 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metrics-certs podName:4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501 nodeName:}" failed. No retries permitted until 2026-01-21 14:47:19.979507862 +0000 UTC m=+802.056340881 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metrics-certs") pod "speaker-5m6ct" (UID: "4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501") : secret "speaker-certs-secret" not found Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.480167 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fn6d\" (UniqueName: \"kubernetes.io/projected/694bf42b-c612-44c2-964b-c91336b8afa1-kube-api-access-2fn6d\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.480326 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metallb-excludel2\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.480460 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: E0121 14:47:19.480642 4902 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 14:47:19 crc kubenswrapper[4902]: E0121 14:47:19.480816 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist podName:4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501 nodeName:}" failed. No retries permitted until 2026-01-21 14:47:19.980792918 +0000 UTC m=+802.057625947 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist") pod "speaker-5m6ct" (UID: "4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501") : secret "metallb-memberlist" not found Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.481202 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metallb-excludel2\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.498542 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kwhk\" (UniqueName: \"kubernetes.io/projected/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-kube-api-access-5kwhk\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.545018 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.557415 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.584010 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/694bf42b-c612-44c2-964b-c91336b8afa1-cert\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.584295 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/694bf42b-c612-44c2-964b-c91336b8afa1-metrics-certs\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.584366 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fn6d\" (UniqueName: \"kubernetes.io/projected/694bf42b-c612-44c2-964b-c91336b8afa1-kube-api-access-2fn6d\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.587272 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.590592 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/694bf42b-c612-44c2-964b-c91336b8afa1-metrics-certs\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.597291 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/694bf42b-c612-44c2-964b-c91336b8afa1-cert\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.601680 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fn6d\" (UniqueName: \"kubernetes.io/projected/694bf42b-c612-44c2-964b-c91336b8afa1-kube-api-access-2fn6d\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.668948 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.781199 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj"] Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.988832 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.988900 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metrics-certs\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: E0121 14:47:19.989079 4902 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 14:47:19 crc kubenswrapper[4902]: E0121 14:47:19.989240 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist podName:4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501 nodeName:}" failed. No retries permitted until 2026-01-21 14:47:20.989215143 +0000 UTC m=+803.066048202 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist") pod "speaker-5m6ct" (UID: "4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501") : secret "metallb-memberlist" not found Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.994471 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metrics-certs\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:20 crc kubenswrapper[4902]: I0121 14:47:20.061897 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-h2pgt"] Jan 21 14:47:20 crc kubenswrapper[4902]: W0121 14:47:20.062227 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod694bf42b_c612_44c2_964b_c91336b8afa1.slice/crio-fc563eb4070b9583b0d9ef6ecbd3c8ca154fb03044038095e7275d6e8955a104 WatchSource:0}: Error finding container fc563eb4070b9583b0d9ef6ecbd3c8ca154fb03044038095e7275d6e8955a104: Status 404 returned error can't find the container with id fc563eb4070b9583b0d9ef6ecbd3c8ca154fb03044038095e7275d6e8955a104 Jan 21 14:47:20 crc kubenswrapper[4902]: I0121 14:47:20.097355 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-h2pgt" event={"ID":"694bf42b-c612-44c2-964b-c91336b8afa1","Type":"ContainerStarted","Data":"fc563eb4070b9583b0d9ef6ecbd3c8ca154fb03044038095e7275d6e8955a104"} Jan 21 14:47:20 crc kubenswrapper[4902]: I0121 14:47:20.098674 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" event={"ID":"4f8bf62b-aae0-4080-a5ee-2472a60fe41f","Type":"ContainerStarted","Data":"3ebc5e0972d1ffc10954a17589e962b34e16d9c215cbf7732d46b98beb35449b"} Jan 21 14:47:21 crc kubenswrapper[4902]: I0121 14:47:21.006382 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:21 crc kubenswrapper[4902]: I0121 14:47:21.011624 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:21 crc kubenswrapper[4902]: I0121 14:47:21.105873 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerStarted","Data":"d0d982d2efc1c22113547c6f8b2c34d2f38c9e25c7ad8b1d4994153eb7424112"} Jan 21 14:47:21 crc kubenswrapper[4902]: I0121 14:47:21.107500 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-h2pgt" event={"ID":"694bf42b-c612-44c2-964b-c91336b8afa1","Type":"ContainerStarted","Data":"2dc60d565ba8f1f83ee0c26cd8fc741bc8f7fa061135cf861a1cbc885fda2c83"} Jan 21 14:47:21 crc kubenswrapper[4902]: I0121 14:47:21.107532 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-h2pgt" event={"ID":"694bf42b-c612-44c2-964b-c91336b8afa1","Type":"ContainerStarted","Data":"25c8154f40f7603c4d8d3a5d2c09aab2847e96fd9b1220d7afe61265f5eacc6d"} Jan 21 14:47:21 crc kubenswrapper[4902]: I0121 14:47:21.107669 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:21 crc kubenswrapper[4902]: I0121 14:47:21.144756 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-h2pgt" podStartSLOduration=2.144737557 podStartE2EDuration="2.144737557s" podCreationTimestamp="2026-01-21 14:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:47:21.139717706 +0000 UTC m=+803.216550735" watchObservedRunningTime="2026-01-21 14:47:21.144737557 +0000 UTC m=+803.221570576" Jan 21 14:47:21 crc kubenswrapper[4902]: I0121 14:47:21.144965 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-5m6ct" Jan 21 14:47:21 crc kubenswrapper[4902]: W0121 14:47:21.177746 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fbfffc0_8fac_4684_9cc8_2a3bcc3cb501.slice/crio-a70fc81265026e8babae3f6efb71ecce387fdea03a2b3759a5ae48cf5a1abb7b WatchSource:0}: Error finding container a70fc81265026e8babae3f6efb71ecce387fdea03a2b3759a5ae48cf5a1abb7b: Status 404 returned error can't find the container with id a70fc81265026e8babae3f6efb71ecce387fdea03a2b3759a5ae48cf5a1abb7b Jan 21 14:47:22 crc kubenswrapper[4902]: I0121 14:47:22.136380 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5m6ct" event={"ID":"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501","Type":"ContainerStarted","Data":"4fc0a47cc3cd82a1abbef70b7e66a4dda864f4b181f8a7bb00e4670b82e62a14"} Jan 21 14:47:22 crc kubenswrapper[4902]: I0121 14:47:22.136423 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5m6ct" event={"ID":"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501","Type":"ContainerStarted","Data":"229065e795540674d1aa9b00ea4d50f98e2e29a00f768bd69418f69dc9189cac"} Jan 21 14:47:22 crc kubenswrapper[4902]: I0121 14:47:22.136433 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5m6ct" event={"ID":"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501","Type":"ContainerStarted","Data":"a70fc81265026e8babae3f6efb71ecce387fdea03a2b3759a5ae48cf5a1abb7b"} Jan 21 14:47:22 crc kubenswrapper[4902]: I0121 14:47:22.136951 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-5m6ct" Jan 21 14:47:22 crc kubenswrapper[4902]: I0121 14:47:22.156751 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-5m6ct" podStartSLOduration=3.156735049 podStartE2EDuration="3.156735049s" podCreationTimestamp="2026-01-21 14:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:47:22.153272382 +0000 UTC m=+804.230105411" watchObservedRunningTime="2026-01-21 14:47:22.156735049 +0000 UTC m=+804.233568078" Jan 21 14:47:27 crc kubenswrapper[4902]: I0121 14:47:27.176901 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" event={"ID":"4f8bf62b-aae0-4080-a5ee-2472a60fe41f","Type":"ContainerStarted","Data":"2a2242ac31d7b6d5e1f5b204f94d2bdf1563dd9b5c76f0dea6ea42216c2a245d"} Jan 21 14:47:28 crc kubenswrapper[4902]: I0121 14:47:28.184291 4902 generic.go:334] "Generic (PLEG): container finished" podID="6fc6639b-9150-4158-836f-1ffc1c4f5339" containerID="c40e98fb7dd2e59e131c732ec995037207c2554ff24797630bce0da5de4d7313" exitCode=0 Jan 21 14:47:28 crc kubenswrapper[4902]: I0121 14:47:28.184373 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerDied","Data":"c40e98fb7dd2e59e131c732ec995037207c2554ff24797630bce0da5de4d7313"} Jan 21 14:47:28 crc kubenswrapper[4902]: I0121 14:47:28.207978 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" podStartSLOduration=1.986228116 podStartE2EDuration="9.207957686s" podCreationTimestamp="2026-01-21 14:47:19 +0000 UTC" firstStartedPulling="2026-01-21 14:47:19.793992453 +0000 UTC m=+801.870825482" 
lastFinishedPulling="2026-01-21 14:47:27.015722023 +0000 UTC m=+809.092555052" observedRunningTime="2026-01-21 14:47:28.202871394 +0000 UTC m=+810.279704433" watchObservedRunningTime="2026-01-21 14:47:28.207957686 +0000 UTC m=+810.284790705" Jan 21 14:47:29 crc kubenswrapper[4902]: I0121 14:47:29.191573 4902 generic.go:334] "Generic (PLEG): container finished" podID="6fc6639b-9150-4158-836f-1ffc1c4f5339" containerID="6eda013d84d74c5cd2bf23a93379960e8c3955d812e4b16d01246d6591b5b0f0" exitCode=0 Jan 21 14:47:29 crc kubenswrapper[4902]: I0121 14:47:29.191664 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerDied","Data":"6eda013d84d74c5cd2bf23a93379960e8c3955d812e4b16d01246d6591b5b0f0"} Jan 21 14:47:29 crc kubenswrapper[4902]: I0121 14:47:29.192075 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:30 crc kubenswrapper[4902]: I0121 14:47:30.200800 4902 generic.go:334] "Generic (PLEG): container finished" podID="6fc6639b-9150-4158-836f-1ffc1c4f5339" containerID="410f3ac73f83325698e7da819f9835d8c0c4f5701a1b23386391064a5f04454e" exitCode=0 Jan 21 14:47:30 crc kubenswrapper[4902]: I0121 14:47:30.200875 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerDied","Data":"410f3ac73f83325698e7da819f9835d8c0c4f5701a1b23386391064a5f04454e"} Jan 21 14:47:31 crc kubenswrapper[4902]: I0121 14:47:31.154383 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-5m6ct" Jan 21 14:47:31 crc kubenswrapper[4902]: I0121 14:47:31.212985 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerStarted","Data":"273414f1045bbfa16013d2380fed0883dbefd979d6449007cfb94e6c9f7fc4b0"} Jan 21 14:47:31 crc kubenswrapper[4902]: I0121 14:47:31.213033 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerStarted","Data":"aefcc73b21c339b9a55d374c0caaf50e585a32c4e5bdc8b6e3fda20de9709e6f"} Jan 21 14:47:31 crc kubenswrapper[4902]: I0121 14:47:31.213064 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerStarted","Data":"3f9e094ffe7c03f4513dca55d7eb8b90a6abcfd2d9fc2413ca33bc769e18f5cb"} Jan 21 14:47:31 crc kubenswrapper[4902]: I0121 14:47:31.213075 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerStarted","Data":"642d3400012bd7e6aea09781934cabf9f6d86a98a245d3730ef906a9a9c14b6f"} Jan 21 14:47:31 crc kubenswrapper[4902]: I0121 14:47:31.213086 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerStarted","Data":"26a2ffbba2b4c68814f3f037364db515fed647f6b2e66468ba5f01a929a1b21b"} Jan 21 14:47:32 crc kubenswrapper[4902]: I0121 14:47:32.223377 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerStarted","Data":"a4c38dbe988b16a8b5b7e5aa5fe888bb86be8a463b28730e6f363daf0391799f"} 
Jan 21 14:47:32 crc kubenswrapper[4902]: I0121 14:47:32.223566 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:32 crc kubenswrapper[4902]: I0121 14:47:32.248912 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-xpzj8" podStartSLOduration=6.883316598 podStartE2EDuration="13.248895521s" podCreationTimestamp="2026-01-21 14:47:19 +0000 UTC" firstStartedPulling="2026-01-21 14:47:20.670069488 +0000 UTC m=+802.746902537" lastFinishedPulling="2026-01-21 14:47:27.035648431 +0000 UTC m=+809.112481460" observedRunningTime="2026-01-21 14:47:32.247994255 +0000 UTC m=+814.324827284" watchObservedRunningTime="2026-01-21 14:47:32.248895521 +0000 UTC m=+814.325728550" Jan 21 14:47:32 crc kubenswrapper[4902]: I0121 14:47:32.950812 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj"] Jan 21 14:47:32 crc kubenswrapper[4902]: I0121 14:47:32.952180 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:32 crc kubenswrapper[4902]: I0121 14:47:32.954171 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 14:47:32 crc kubenswrapper[4902]: I0121 14:47:32.964250 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj"] Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.070334 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.070410 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh52k\" (UniqueName: \"kubernetes.io/projected/b4942197-db6e-4bb6-af6d-24694a007a0b-kube-api-access-fh52k\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.070601 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.172139 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: 
I0121 14:47:33.172475 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.172583 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh52k\" (UniqueName: \"kubernetes.io/projected/b4942197-db6e-4bb6-af6d-24694a007a0b-kube-api-access-fh52k\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.172631 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.172779 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.194874 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh52k\" (UniqueName: \"kubernetes.io/projected/b4942197-db6e-4bb6-af6d-24694a007a0b-kube-api-access-fh52k\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.270007 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.723720 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj"] Jan 21 14:47:34 crc kubenswrapper[4902]: I0121 14:47:34.240064 4902 generic.go:334] "Generic (PLEG): container finished" podID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerID="0a4fd68fee547445dae37cf6af19dedf174c5364ea24c8619910287dbe0d9338" exitCode=0 Jan 21 14:47:34 crc kubenswrapper[4902]: I0121 14:47:34.240108 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" event={"ID":"b4942197-db6e-4bb6-af6d-24694a007a0b","Type":"ContainerDied","Data":"0a4fd68fee547445dae37cf6af19dedf174c5364ea24c8619910287dbe0d9338"} Jan 21 14:47:34 crc kubenswrapper[4902]: I0121 14:47:34.240135 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" event={"ID":"b4942197-db6e-4bb6-af6d-24694a007a0b","Type":"ContainerStarted","Data":"6962038ec7ce5218af0f0011e64727b716daf49cb4483a70fdeb952e8753254a"} Jan 21 14:47:34 crc kubenswrapper[4902]: I0121 14:47:34.546111 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:34 crc kubenswrapper[4902]: I0121 14:47:34.616435 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.521883 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fzz8m"] Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.523245 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.535308 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzz8m"] Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.618871 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-catalog-content\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.618982 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-utilities\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.619007 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp7dc\" (UniqueName: \"kubernetes.io/projected/89a05951-5e50-40de-92cd-2e064b9251f6-kube-api-access-zp7dc\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.719766 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-utilities\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.719863 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp7dc\" (UniqueName: \"kubernetes.io/projected/89a05951-5e50-40de-92cd-2e064b9251f6-kube-api-access-zp7dc\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.719927 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-catalog-content\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.720672 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-utilities\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.720773 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-catalog-content\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.738069 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zp7dc\" (UniqueName: \"kubernetes.io/projected/89a05951-5e50-40de-92cd-2e064b9251f6-kube-api-access-zp7dc\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.843692 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:37 crc kubenswrapper[4902]: I0121 14:47:37.277311 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzz8m"] Jan 21 14:47:37 crc kubenswrapper[4902]: W0121 14:47:37.289587 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89a05951_5e50_40de_92cd_2e064b9251f6.slice/crio-9ed3a42d2948444a026315cf05bef1569614af4c0ee778024e46e86063755697 WatchSource:0}: Error finding container 9ed3a42d2948444a026315cf05bef1569614af4c0ee778024e46e86063755697: Status 404 returned error can't find the container with id 9ed3a42d2948444a026315cf05bef1569614af4c0ee778024e46e86063755697 Jan 21 14:47:38 crc kubenswrapper[4902]: I0121 14:47:38.277678 4902 generic.go:334] "Generic (PLEG): container finished" podID="89a05951-5e50-40de-92cd-2e064b9251f6" containerID="3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9" exitCode=0 Jan 21 14:47:38 crc kubenswrapper[4902]: I0121 14:47:38.277795 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzz8m" event={"ID":"89a05951-5e50-40de-92cd-2e064b9251f6","Type":"ContainerDied","Data":"3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9"} Jan 21 14:47:38 crc kubenswrapper[4902]: I0121 14:47:38.278162 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzz8m" event={"ID":"89a05951-5e50-40de-92cd-2e064b9251f6","Type":"ContainerStarted","Data":"9ed3a42d2948444a026315cf05bef1569614af4c0ee778024e46e86063755697"} Jan 21 14:47:39 crc kubenswrapper[4902]: I0121 14:47:39.288427 4902 generic.go:334] "Generic (PLEG): container finished" podID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerID="d60a0eed6453f9d43c23a2f26215345a7e683257d0707aefa6f32dff6ebd53be" exitCode=0 Jan 21 14:47:39 crc kubenswrapper[4902]: I0121 14:47:39.288513 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" event={"ID":"b4942197-db6e-4bb6-af6d-24694a007a0b","Type":"ContainerDied","Data":"d60a0eed6453f9d43c23a2f26215345a7e683257d0707aefa6f32dff6ebd53be"} Jan 21 14:47:39 crc kubenswrapper[4902]: I0121 14:47:39.562521 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:39 crc kubenswrapper[4902]: I0121 14:47:39.672933 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:40 crc kubenswrapper[4902]: I0121 14:47:40.300931 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" event={"ID":"b4942197-db6e-4bb6-af6d-24694a007a0b","Type":"ContainerStarted","Data":"fd0d369540fc1d16ed445672cd614b883042045aff9a1666bb0a4653dbd19f05"} Jan 21 14:47:41 crc kubenswrapper[4902]: I0121 14:47:41.307078 4902 generic.go:334] "Generic (PLEG): container finished" 
podID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerID="fd0d369540fc1d16ed445672cd614b883042045aff9a1666bb0a4653dbd19f05" exitCode=0 Jan 21 14:47:41 crc kubenswrapper[4902]: I0121 14:47:41.307368 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" event={"ID":"b4942197-db6e-4bb6-af6d-24694a007a0b","Type":"ContainerDied","Data":"fd0d369540fc1d16ed445672cd614b883042045aff9a1666bb0a4653dbd19f05"} Jan 21 14:47:41 crc kubenswrapper[4902]: I0121 14:47:41.324442 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzz8m" event={"ID":"89a05951-5e50-40de-92cd-2e064b9251f6","Type":"ContainerStarted","Data":"989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8"} Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.334121 4902 generic.go:334] "Generic (PLEG): container finished" podID="89a05951-5e50-40de-92cd-2e064b9251f6" containerID="989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8" exitCode=0 Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.334269 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzz8m" event={"ID":"89a05951-5e50-40de-92cd-2e064b9251f6","Type":"ContainerDied","Data":"989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8"} Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.566165 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.597377 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh52k\" (UniqueName: \"kubernetes.io/projected/b4942197-db6e-4bb6-af6d-24694a007a0b-kube-api-access-fh52k\") pod \"b4942197-db6e-4bb6-af6d-24694a007a0b\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.597485 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-bundle\") pod \"b4942197-db6e-4bb6-af6d-24694a007a0b\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.597545 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-util\") pod \"b4942197-db6e-4bb6-af6d-24694a007a0b\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.599289 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-bundle" (OuterVolumeSpecName: "bundle") pod "b4942197-db6e-4bb6-af6d-24694a007a0b" (UID: "b4942197-db6e-4bb6-af6d-24694a007a0b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.603528 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4942197-db6e-4bb6-af6d-24694a007a0b-kube-api-access-fh52k" (OuterVolumeSpecName: "kube-api-access-fh52k") pod "b4942197-db6e-4bb6-af6d-24694a007a0b" (UID: "b4942197-db6e-4bb6-af6d-24694a007a0b"). InnerVolumeSpecName "kube-api-access-fh52k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.608469 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-util" (OuterVolumeSpecName: "util") pod "b4942197-db6e-4bb6-af6d-24694a007a0b" (UID: "b4942197-db6e-4bb6-af6d-24694a007a0b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.698692 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh52k\" (UniqueName: \"kubernetes.io/projected/b4942197-db6e-4bb6-af6d-24694a007a0b-kube-api-access-fh52k\") on node \"crc\" DevicePath \"\"" Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.698732 4902 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.698741 4902 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-util\") on node \"crc\" DevicePath \"\"" Jan 21 14:47:43 crc kubenswrapper[4902]: I0121 14:47:43.340323 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:43 crc kubenswrapper[4902]: I0121 14:47:43.340326 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" event={"ID":"b4942197-db6e-4bb6-af6d-24694a007a0b","Type":"ContainerDied","Data":"6962038ec7ce5218af0f0011e64727b716daf49cb4483a70fdeb952e8753254a"} Jan 21 14:47:43 crc kubenswrapper[4902]: I0121 14:47:43.340368 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6962038ec7ce5218af0f0011e64727b716daf49cb4483a70fdeb952e8753254a" Jan 21 14:47:43 crc kubenswrapper[4902]: I0121 14:47:43.342035 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzz8m" event={"ID":"89a05951-5e50-40de-92cd-2e064b9251f6","Type":"ContainerStarted","Data":"9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2"} Jan 21 14:47:43 crc kubenswrapper[4902]: I0121 14:47:43.369435 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fzz8m" podStartSLOduration=3.231805329 podStartE2EDuration="7.369416292s" podCreationTimestamp="2026-01-21 14:47:36 +0000 UTC" firstStartedPulling="2026-01-21 14:47:38.625730859 +0000 UTC m=+820.702563888" lastFinishedPulling="2026-01-21 14:47:42.763341812 +0000 UTC m=+824.840174851" observedRunningTime="2026-01-21 14:47:43.365001478 +0000 UTC m=+825.441834527" watchObservedRunningTime="2026-01-21 14:47:43.369416292 +0000 UTC m=+825.446249321" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.035317 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb"] Jan 21 14:47:46 crc kubenswrapper[4902]: E0121 14:47:46.036500 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerName="extract" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.036579 4902 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerName="extract" Jan 21 14:47:46 crc kubenswrapper[4902]: E0121 14:47:46.036659 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerName="util" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.036737 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerName="util" Jan 21 14:47:46 crc kubenswrapper[4902]: E0121 14:47:46.036819 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerName="pull" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.036890 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerName="pull" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.037145 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerName="extract" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.037597 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.039218 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.039440 4902 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-8fbs6" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.039480 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.125396 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb"] Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.141511 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58qph\" (UniqueName: \"kubernetes.io/projected/0d7d00e8-0d08-48df-82d4-1427b10adbf9-kube-api-access-58qph\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lppwb\" (UID: \"0d7d00e8-0d08-48df-82d4-1427b10adbf9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.141766 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d7d00e8-0d08-48df-82d4-1427b10adbf9-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lppwb\" (UID: \"0d7d00e8-0d08-48df-82d4-1427b10adbf9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.242885 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58qph\" (UniqueName: \"kubernetes.io/projected/0d7d00e8-0d08-48df-82d4-1427b10adbf9-kube-api-access-58qph\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lppwb\" (UID: \"0d7d00e8-0d08-48df-82d4-1427b10adbf9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.242951 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d7d00e8-0d08-48df-82d4-1427b10adbf9-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lppwb\" (UID: \"0d7d00e8-0d08-48df-82d4-1427b10adbf9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.243480 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d7d00e8-0d08-48df-82d4-1427b10adbf9-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lppwb\" (UID: \"0d7d00e8-0d08-48df-82d4-1427b10adbf9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.269268 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58qph\" (UniqueName: \"kubernetes.io/projected/0d7d00e8-0d08-48df-82d4-1427b10adbf9-kube-api-access-58qph\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lppwb\" (UID: \"0d7d00e8-0d08-48df-82d4-1427b10adbf9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.352410 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.615474 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb"] Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.844833 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.844873 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:47 crc kubenswrapper[4902]: I0121 14:47:47.365526 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" event={"ID":"0d7d00e8-0d08-48df-82d4-1427b10adbf9","Type":"ContainerStarted","Data":"bc17a62a4053588b66f8088ee78e88091ced66d4df1ed9671a0d220865ec3e2d"} Jan 21 14:47:47 crc kubenswrapper[4902]: I0121 14:47:47.769976 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:47:47 crc kubenswrapper[4902]: I0121 14:47:47.770076 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:47:47 crc kubenswrapper[4902]: I0121 14:47:47.908534 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fzz8m" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="registry-server" probeResult="failure" output=< Jan 21 14:47:47 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 14:47:47 crc 
kubenswrapper[4902]: > Jan 21 14:47:49 crc kubenswrapper[4902]: I0121 14:47:49.549446 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:56 crc kubenswrapper[4902]: I0121 14:47:56.886543 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:56 crc kubenswrapper[4902]: I0121 14:47:56.943783 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.105656 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fzz8m"] Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.450594 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" event={"ID":"0d7d00e8-0d08-48df-82d4-1427b10adbf9","Type":"ContainerStarted","Data":"2a082becd3fb0d88001474c2e54e6d3ff7b369a02a05e06721cfed6d710bdf6e"} Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.450807 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fzz8m" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="registry-server" containerID="cri-o://9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2" gracePeriod=2 Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.480840 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" podStartSLOduration=1.4153507570000001 podStartE2EDuration="12.480817695s" podCreationTimestamp="2026-01-21 14:47:46 +0000 UTC" firstStartedPulling="2026-01-21 14:47:46.6384348 +0000 UTC m=+828.715267829" lastFinishedPulling="2026-01-21 14:47:57.703901728 +0000 UTC m=+839.780734767" observedRunningTime="2026-01-21 14:47:58.476949857 +0000 UTC m=+840.553782906" watchObservedRunningTime="2026-01-21 14:47:58.480817695 +0000 UTC m=+840.557650724" Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.825371 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.952463 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-catalog-content\") pod \"89a05951-5e50-40de-92cd-2e064b9251f6\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.952551 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp7dc\" (UniqueName: \"kubernetes.io/projected/89a05951-5e50-40de-92cd-2e064b9251f6-kube-api-access-zp7dc\") pod \"89a05951-5e50-40de-92cd-2e064b9251f6\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.952668 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-utilities\") pod \"89a05951-5e50-40de-92cd-2e064b9251f6\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.955806 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-utilities" (OuterVolumeSpecName: "utilities") pod "89a05951-5e50-40de-92cd-2e064b9251f6" (UID: "89a05951-5e50-40de-92cd-2e064b9251f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.960414 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a05951-5e50-40de-92cd-2e064b9251f6-kube-api-access-zp7dc" (OuterVolumeSpecName: "kube-api-access-zp7dc") pod "89a05951-5e50-40de-92cd-2e064b9251f6" (UID: "89a05951-5e50-40de-92cd-2e064b9251f6"). InnerVolumeSpecName "kube-api-access-zp7dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.055054 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.055090 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp7dc\" (UniqueName: \"kubernetes.io/projected/89a05951-5e50-40de-92cd-2e064b9251f6-kube-api-access-zp7dc\") on node \"crc\" DevicePath \"\"" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.072200 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89a05951-5e50-40de-92cd-2e064b9251f6" (UID: "89a05951-5e50-40de-92cd-2e064b9251f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.156697 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.457776 4902 generic.go:334] "Generic (PLEG): container finished" podID="89a05951-5e50-40de-92cd-2e064b9251f6" containerID="9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2" exitCode=0 Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.457831 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzz8m" event={"ID":"89a05951-5e50-40de-92cd-2e064b9251f6","Type":"ContainerDied","Data":"9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2"} Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.457862 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.457906 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzz8m" event={"ID":"89a05951-5e50-40de-92cd-2e064b9251f6","Type":"ContainerDied","Data":"9ed3a42d2948444a026315cf05bef1569614af4c0ee778024e46e86063755697"} Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.457926 4902 scope.go:117] "RemoveContainer" containerID="9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.475173 4902 scope.go:117] "RemoveContainer" containerID="989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.492612 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fzz8m"] Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.497679 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fzz8m"] Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.502074 4902 scope.go:117] "RemoveContainer" containerID="3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.519294 4902 scope.go:117] "RemoveContainer" containerID="9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2" Jan 21 14:47:59 crc kubenswrapper[4902]: E0121 14:47:59.519719 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2\": container with ID starting with 9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2 not found: ID does not exist" containerID="9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.519768 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2"} err="failed to get container status \"9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2\": rpc error: code = NotFound desc = could not find container \"9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2\": container with ID starting with 9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2 not found: ID does not exist" Jan 21 14:47:59 crc 
kubenswrapper[4902]: I0121 14:47:59.519807 4902 scope.go:117] "RemoveContainer" containerID="989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8" Jan 21 14:47:59 crc kubenswrapper[4902]: E0121 14:47:59.520459 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8\": container with ID starting with 989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8 not found: ID does not exist" containerID="989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.520506 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8"} err="failed to get container status \"989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8\": rpc error: code = NotFound desc = could not find container \"989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8\": container with ID starting with 989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8 not found: ID does not exist" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.520538 4902 scope.go:117] "RemoveContainer" containerID="3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9" Jan 21 14:47:59 crc kubenswrapper[4902]: E0121 14:47:59.520843 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9\": container with ID starting with 3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9 not found: ID does not exist" containerID="3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.520866 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9"} err="failed to get container status \"3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9\": rpc error: code = NotFound desc = could not find container \"3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9\": container with ID starting with 3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9 not found: ID does not exist" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.302279 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" path="/var/lib/kubelet/pods/89a05951-5e50-40de-92cd-2e064b9251f6/volumes" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.933793 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-p2522"] Jan 21 14:48:00 crc kubenswrapper[4902]: E0121 14:48:00.934086 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="extract-content" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.934101 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="extract-content" Jan 21 14:48:00 crc kubenswrapper[4902]: E0121 14:48:00.934114 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="registry-server" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.934122 4902 
state_mem.go:107] "Deleted CPUSet assignment" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="registry-server" Jan 21 14:48:00 crc kubenswrapper[4902]: E0121 14:48:00.934149 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="extract-utilities" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.934159 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="extract-utilities" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.934285 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="registry-server" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.935397 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.938324 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.938849 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.944990 4902 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-9np5k" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.945146 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-p2522"] Jan 21 14:48:01 crc kubenswrapper[4902]: I0121 14:48:01.086108 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9093daac-4fd2-4075-8e73-d358cd885c3c-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-p2522\" (UID: \"9093daac-4fd2-4075-8e73-d358cd885c3c\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:01 crc kubenswrapper[4902]: I0121 14:48:01.086180 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgr6d\" (UniqueName: \"kubernetes.io/projected/9093daac-4fd2-4075-8e73-d358cd885c3c-kube-api-access-dgr6d\") pod \"cert-manager-webhook-f4fb5df64-p2522\" (UID: \"9093daac-4fd2-4075-8e73-d358cd885c3c\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:01 crc kubenswrapper[4902]: I0121 14:48:01.187256 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9093daac-4fd2-4075-8e73-d358cd885c3c-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-p2522\" (UID: \"9093daac-4fd2-4075-8e73-d358cd885c3c\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:01 crc kubenswrapper[4902]: I0121 14:48:01.187317 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgr6d\" (UniqueName: \"kubernetes.io/projected/9093daac-4fd2-4075-8e73-d358cd885c3c-kube-api-access-dgr6d\") pod \"cert-manager-webhook-f4fb5df64-p2522\" (UID: \"9093daac-4fd2-4075-8e73-d358cd885c3c\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:01 crc kubenswrapper[4902]: I0121 14:48:01.207839 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgr6d\" (UniqueName: 
\"kubernetes.io/projected/9093daac-4fd2-4075-8e73-d358cd885c3c-kube-api-access-dgr6d\") pod \"cert-manager-webhook-f4fb5df64-p2522\" (UID: \"9093daac-4fd2-4075-8e73-d358cd885c3c\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:01 crc kubenswrapper[4902]: I0121 14:48:01.211412 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9093daac-4fd2-4075-8e73-d358cd885c3c-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-p2522\" (UID: \"9093daac-4fd2-4075-8e73-d358cd885c3c\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:01 crc kubenswrapper[4902]: I0121 14:48:01.303781 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:01 crc kubenswrapper[4902]: I0121 14:48:01.493737 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-p2522"] Jan 21 14:48:01 crc kubenswrapper[4902]: W0121 14:48:01.505526 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9093daac_4fd2_4075_8e73_d358cd885c3c.slice/crio-38ad53a80d5cca05802d38a7a845a8fd4913860bd8d324fd5d8e7167b91f8813 WatchSource:0}: Error finding container 38ad53a80d5cca05802d38a7a845a8fd4913860bd8d324fd5d8e7167b91f8813: Status 404 returned error can't find the container with id 38ad53a80d5cca05802d38a7a845a8fd4913860bd8d324fd5d8e7167b91f8813 Jan 21 14:48:02 crc kubenswrapper[4902]: I0121 14:48:02.478101 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" event={"ID":"9093daac-4fd2-4075-8e73-d358cd885c3c","Type":"ContainerStarted","Data":"38ad53a80d5cca05802d38a7a845a8fd4913860bd8d324fd5d8e7167b91f8813"} Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.485815 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-llf68"] Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.487088 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.490445 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-llf68"] Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.493382 4902 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-6m2lz" Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.643469 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h42m\" (UniqueName: \"kubernetes.io/projected/21799993-1de7-4aef-9cfa-c132249ecf74-kube-api-access-7h42m\") pod \"cert-manager-cainjector-855d9ccff4-llf68\" (UID: \"21799993-1de7-4aef-9cfa-c132249ecf74\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.643518 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21799993-1de7-4aef-9cfa-c132249ecf74-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-llf68\" (UID: \"21799993-1de7-4aef-9cfa-c132249ecf74\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.745252 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h42m\" (UniqueName: \"kubernetes.io/projected/21799993-1de7-4aef-9cfa-c132249ecf74-kube-api-access-7h42m\") pod \"cert-manager-cainjector-855d9ccff4-llf68\" (UID: \"21799993-1de7-4aef-9cfa-c132249ecf74\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.745651 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21799993-1de7-4aef-9cfa-c132249ecf74-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-llf68\" (UID: \"21799993-1de7-4aef-9cfa-c132249ecf74\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.777239 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h42m\" (UniqueName: \"kubernetes.io/projected/21799993-1de7-4aef-9cfa-c132249ecf74-kube-api-access-7h42m\") pod \"cert-manager-cainjector-855d9ccff4-llf68\" (UID: \"21799993-1de7-4aef-9cfa-c132249ecf74\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.778506 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21799993-1de7-4aef-9cfa-c132249ecf74-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-llf68\" (UID: \"21799993-1de7-4aef-9cfa-c132249ecf74\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.826826 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" Jan 21 14:48:05 crc kubenswrapper[4902]: I0121 14:48:05.212717 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-llf68"] Jan 21 14:48:05 crc kubenswrapper[4902]: W0121 14:48:05.219617 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21799993_1de7_4aef_9cfa_c132249ecf74.slice/crio-b6821c4081e9b32522ffad5787b7c27e5741889901baf84d22740f4f20de8910 WatchSource:0}: Error finding container b6821c4081e9b32522ffad5787b7c27e5741889901baf84d22740f4f20de8910: Status 404 returned error can't find the container with id b6821c4081e9b32522ffad5787b7c27e5741889901baf84d22740f4f20de8910 Jan 21 14:48:05 crc kubenswrapper[4902]: I0121 14:48:05.510851 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" event={"ID":"21799993-1de7-4aef-9cfa-c132249ecf74","Type":"ContainerStarted","Data":"b6821c4081e9b32522ffad5787b7c27e5741889901baf84d22740f4f20de8910"} Jan 21 14:48:09 crc kubenswrapper[4902]: I0121 14:48:09.535871 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" event={"ID":"21799993-1de7-4aef-9cfa-c132249ecf74","Type":"ContainerStarted","Data":"475de260633d7bc549a9d606755d85a160d93bf66693fbc3c171cfdc76134fa5"} Jan 21 14:48:09 crc kubenswrapper[4902]: I0121 14:48:09.541313 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" event={"ID":"9093daac-4fd2-4075-8e73-d358cd885c3c","Type":"ContainerStarted","Data":"d67cf11162782bdcacfb014ce4cc0261c7e396c13e5ad0547d2ce7520c149dca"} Jan 21 14:48:09 crc kubenswrapper[4902]: I0121 14:48:09.541488 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:09 crc kubenswrapper[4902]: I0121 14:48:09.553384 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" podStartSLOduration=1.460956391 podStartE2EDuration="5.553367559s" podCreationTimestamp="2026-01-21 14:48:04 +0000 UTC" firstStartedPulling="2026-01-21 14:48:05.221832294 +0000 UTC m=+847.298665323" lastFinishedPulling="2026-01-21 14:48:09.314243462 +0000 UTC m=+851.391076491" observedRunningTime="2026-01-21 14:48:09.55046345 +0000 UTC m=+851.627296489" watchObservedRunningTime="2026-01-21 14:48:09.553367559 +0000 UTC m=+851.630200598" Jan 21 14:48:09 crc kubenswrapper[4902]: I0121 14:48:09.578611 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" podStartSLOduration=1.749928361 podStartE2EDuration="9.578588783s" podCreationTimestamp="2026-01-21 14:48:00 +0000 UTC" firstStartedPulling="2026-01-21 14:48:01.507635037 +0000 UTC m=+843.584468066" lastFinishedPulling="2026-01-21 14:48:09.336295459 +0000 UTC m=+851.413128488" observedRunningTime="2026-01-21 14:48:09.574366447 +0000 UTC m=+851.651199496" watchObservedRunningTime="2026-01-21 14:48:09.578588783 +0000 UTC m=+851.655421822" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.342603 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4cd6m"] Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.343441 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-4cd6m" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.346543 4902 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lczzx" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.359390 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4cd6m"] Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.490677 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12dae6d4-a2b1-4ef8-ae74-369697c9172b-bound-sa-token\") pod \"cert-manager-86cb77c54b-4cd6m\" (UID: \"12dae6d4-a2b1-4ef8-ae74-369697c9172b\") " pod="cert-manager/cert-manager-86cb77c54b-4cd6m" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.491290 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djpgl\" (UniqueName: \"kubernetes.io/projected/12dae6d4-a2b1-4ef8-ae74-369697c9172b-kube-api-access-djpgl\") pod \"cert-manager-86cb77c54b-4cd6m\" (UID: \"12dae6d4-a2b1-4ef8-ae74-369697c9172b\") " pod="cert-manager/cert-manager-86cb77c54b-4cd6m" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.592973 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12dae6d4-a2b1-4ef8-ae74-369697c9172b-bound-sa-token\") pod \"cert-manager-86cb77c54b-4cd6m\" (UID: \"12dae6d4-a2b1-4ef8-ae74-369697c9172b\") " pod="cert-manager/cert-manager-86cb77c54b-4cd6m" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.593010 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djpgl\" (UniqueName: \"kubernetes.io/projected/12dae6d4-a2b1-4ef8-ae74-369697c9172b-kube-api-access-djpgl\") pod \"cert-manager-86cb77c54b-4cd6m\" (UID: \"12dae6d4-a2b1-4ef8-ae74-369697c9172b\") " pod="cert-manager/cert-manager-86cb77c54b-4cd6m" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.617133 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djpgl\" (UniqueName: \"kubernetes.io/projected/12dae6d4-a2b1-4ef8-ae74-369697c9172b-kube-api-access-djpgl\") pod \"cert-manager-86cb77c54b-4cd6m\" (UID: \"12dae6d4-a2b1-4ef8-ae74-369697c9172b\") " pod="cert-manager/cert-manager-86cb77c54b-4cd6m" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.623809 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12dae6d4-a2b1-4ef8-ae74-369697c9172b-bound-sa-token\") pod \"cert-manager-86cb77c54b-4cd6m\" (UID: \"12dae6d4-a2b1-4ef8-ae74-369697c9172b\") " pod="cert-manager/cert-manager-86cb77c54b-4cd6m" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.720602 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-4cd6m" Jan 21 14:48:15 crc kubenswrapper[4902]: I0121 14:48:15.120777 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4cd6m"] Jan 21 14:48:15 crc kubenswrapper[4902]: I0121 14:48:15.597915 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-4cd6m" event={"ID":"12dae6d4-a2b1-4ef8-ae74-369697c9172b","Type":"ContainerStarted","Data":"14a05c26a1f2ba10b45056581c18b90ba5160d665c6fc1cce2ec1f89dc7f4fe2"} Jan 21 14:48:15 crc kubenswrapper[4902]: I0121 14:48:15.598332 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-4cd6m" event={"ID":"12dae6d4-a2b1-4ef8-ae74-369697c9172b","Type":"ContainerStarted","Data":"8ce00bffc240c82186d2c71f5b5500b76190af8418b7e930f030f584e9c68ae9"} Jan 21 14:48:15 crc kubenswrapper[4902]: I0121 14:48:15.616667 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-4cd6m" podStartSLOduration=1.6166471919999998 podStartE2EDuration="1.616647192s" podCreationTimestamp="2026-01-21 14:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:48:15.614104932 +0000 UTC m=+857.690937981" watchObservedRunningTime="2026-01-21 14:48:15.616647192 +0000 UTC m=+857.693480221" Jan 21 14:48:16 crc kubenswrapper[4902]: I0121 14:48:16.307453 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:17 crc kubenswrapper[4902]: I0121 14:48:17.769298 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:48:17 crc kubenswrapper[4902]: I0121 14:48:17.769609 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:48:17 crc kubenswrapper[4902]: I0121 14:48:17.769658 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:48:17 crc kubenswrapper[4902]: I0121 14:48:17.770149 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"097b55fd9fa87b27fef8f06ba3cbfef04c2339f11dc61a41eeced54a3451dbca"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:48:17 crc kubenswrapper[4902]: I0121 14:48:17.770206 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://097b55fd9fa87b27fef8f06ba3cbfef04c2339f11dc61a41eeced54a3451dbca" gracePeriod=600 Jan 21 14:48:18 crc kubenswrapper[4902]: I0121 14:48:18.622278 4902 generic.go:334] "Generic (PLEG): container finished" 
podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="097b55fd9fa87b27fef8f06ba3cbfef04c2339f11dc61a41eeced54a3451dbca" exitCode=0 Jan 21 14:48:18 crc kubenswrapper[4902]: I0121 14:48:18.622361 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"097b55fd9fa87b27fef8f06ba3cbfef04c2339f11dc61a41eeced54a3451dbca"} Jan 21 14:48:18 crc kubenswrapper[4902]: I0121 14:48:18.623141 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"f9ca57ec1458d1c5cf7c9248bedd6ee378b9620abbe566738ff33d6096aeb8f1"} Jan 21 14:48:18 crc kubenswrapper[4902]: I0121 14:48:18.623184 4902 scope.go:117] "RemoveContainer" containerID="1d2da13e9ad46e483ffa722259ad0a8b94b5c2e16fcacdd89045b1d8ac2afd0e" Jan 21 14:48:19 crc kubenswrapper[4902]: I0121 14:48:19.844705 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9nh8f"] Jan 21 14:48:19 crc kubenswrapper[4902]: I0121 14:48:19.846196 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9nh8f" Jan 21 14:48:19 crc kubenswrapper[4902]: I0121 14:48:19.849237 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 14:48:19 crc kubenswrapper[4902]: I0121 14:48:19.849951 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 14:48:19 crc kubenswrapper[4902]: I0121 14:48:19.850277 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-w7v62" Jan 21 14:48:19 crc kubenswrapper[4902]: I0121 14:48:19.870738 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9nh8f"] Jan 21 14:48:19 crc kubenswrapper[4902]: I0121 14:48:19.961184 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9v5z\" (UniqueName: \"kubernetes.io/projected/bac350c4-c5e3-4fa1-8a4d-88ba0729a776-kube-api-access-q9v5z\") pod \"openstack-operator-index-9nh8f\" (UID: \"bac350c4-c5e3-4fa1-8a4d-88ba0729a776\") " pod="openstack-operators/openstack-operator-index-9nh8f" Jan 21 14:48:20 crc kubenswrapper[4902]: I0121 14:48:20.062370 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9v5z\" (UniqueName: \"kubernetes.io/projected/bac350c4-c5e3-4fa1-8a4d-88ba0729a776-kube-api-access-q9v5z\") pod \"openstack-operator-index-9nh8f\" (UID: \"bac350c4-c5e3-4fa1-8a4d-88ba0729a776\") " pod="openstack-operators/openstack-operator-index-9nh8f" Jan 21 14:48:20 crc kubenswrapper[4902]: I0121 14:48:20.086483 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9v5z\" (UniqueName: \"kubernetes.io/projected/bac350c4-c5e3-4fa1-8a4d-88ba0729a776-kube-api-access-q9v5z\") pod \"openstack-operator-index-9nh8f\" (UID: \"bac350c4-c5e3-4fa1-8a4d-88ba0729a776\") " pod="openstack-operators/openstack-operator-index-9nh8f" Jan 21 14:48:20 crc kubenswrapper[4902]: I0121 14:48:20.172733 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9nh8f" Jan 21 14:48:20 crc kubenswrapper[4902]: I0121 14:48:20.658853 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9nh8f"] Jan 21 14:48:21 crc kubenswrapper[4902]: I0121 14:48:21.650364 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9nh8f" event={"ID":"bac350c4-c5e3-4fa1-8a4d-88ba0729a776","Type":"ContainerStarted","Data":"96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d"} Jan 21 14:48:21 crc kubenswrapper[4902]: I0121 14:48:21.650809 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9nh8f" event={"ID":"bac350c4-c5e3-4fa1-8a4d-88ba0729a776","Type":"ContainerStarted","Data":"8701aa92f151895298f0a19d3131c4e6a3110a63aaae0aaedd69e0fff8da5ce8"} Jan 21 14:48:21 crc kubenswrapper[4902]: I0121 14:48:21.663753 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9nh8f" podStartSLOduration=1.836481289 podStartE2EDuration="2.663731062s" podCreationTimestamp="2026-01-21 14:48:19 +0000 UTC" firstStartedPulling="2026-01-21 14:48:20.677562948 +0000 UTC m=+862.754395977" lastFinishedPulling="2026-01-21 14:48:21.504812721 +0000 UTC m=+863.581645750" observedRunningTime="2026-01-21 14:48:21.662107558 +0000 UTC m=+863.738940587" watchObservedRunningTime="2026-01-21 14:48:21.663731062 +0000 UTC m=+863.740564101" Jan 21 14:48:23 crc kubenswrapper[4902]: I0121 14:48:23.621417 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9nh8f"] Jan 21 14:48:23 crc kubenswrapper[4902]: I0121 14:48:23.661236 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-9nh8f" podUID="bac350c4-c5e3-4fa1-8a4d-88ba0729a776" containerName="registry-server" containerID="cri-o://96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d" gracePeriod=2 Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.026935 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9nh8f" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.125226 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9v5z\" (UniqueName: \"kubernetes.io/projected/bac350c4-c5e3-4fa1-8a4d-88ba0729a776-kube-api-access-q9v5z\") pod \"bac350c4-c5e3-4fa1-8a4d-88ba0729a776\" (UID: \"bac350c4-c5e3-4fa1-8a4d-88ba0729a776\") " Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.131371 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac350c4-c5e3-4fa1-8a4d-88ba0729a776-kube-api-access-q9v5z" (OuterVolumeSpecName: "kube-api-access-q9v5z") pod "bac350c4-c5e3-4fa1-8a4d-88ba0729a776" (UID: "bac350c4-c5e3-4fa1-8a4d-88ba0729a776"). InnerVolumeSpecName "kube-api-access-q9v5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.227062 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9v5z\" (UniqueName: \"kubernetes.io/projected/bac350c4-c5e3-4fa1-8a4d-88ba0729a776-kube-api-access-q9v5z\") on node \"crc\" DevicePath \"\"" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.435524 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-dp8mf"] Jan 21 14:48:24 crc kubenswrapper[4902]: E0121 14:48:24.436494 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac350c4-c5e3-4fa1-8a4d-88ba0729a776" containerName="registry-server" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.436702 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac350c4-c5e3-4fa1-8a4d-88ba0729a776" containerName="registry-server" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.437183 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac350c4-c5e3-4fa1-8a4d-88ba0729a776" containerName="registry-server" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.438301 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.455697 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dp8mf"] Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.532538 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gltv2\" (UniqueName: \"kubernetes.io/projected/2d05d6f5-a861-4117-b4a0-00e98da2fe57-kube-api-access-gltv2\") pod \"openstack-operator-index-dp8mf\" (UID: \"2d05d6f5-a861-4117-b4a0-00e98da2fe57\") " pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.634535 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gltv2\" (UniqueName: \"kubernetes.io/projected/2d05d6f5-a861-4117-b4a0-00e98da2fe57-kube-api-access-gltv2\") pod \"openstack-operator-index-dp8mf\" (UID: \"2d05d6f5-a861-4117-b4a0-00e98da2fe57\") " pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.660412 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gltv2\" (UniqueName: \"kubernetes.io/projected/2d05d6f5-a861-4117-b4a0-00e98da2fe57-kube-api-access-gltv2\") pod \"openstack-operator-index-dp8mf\" (UID: \"2d05d6f5-a861-4117-b4a0-00e98da2fe57\") " pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.668831 4902 generic.go:334] "Generic (PLEG): container finished" podID="bac350c4-c5e3-4fa1-8a4d-88ba0729a776" containerID="96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d" exitCode=0 Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.668877 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9nh8f" event={"ID":"bac350c4-c5e3-4fa1-8a4d-88ba0729a776","Type":"ContainerDied","Data":"96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d"} Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.668905 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9nh8f" 
event={"ID":"bac350c4-c5e3-4fa1-8a4d-88ba0729a776","Type":"ContainerDied","Data":"8701aa92f151895298f0a19d3131c4e6a3110a63aaae0aaedd69e0fff8da5ce8"} Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.668923 4902 scope.go:117] "RemoveContainer" containerID="96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.669044 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9nh8f" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.707333 4902 scope.go:117] "RemoveContainer" containerID="96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d" Jan 21 14:48:24 crc kubenswrapper[4902]: E0121 14:48:24.708008 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d\": container with ID starting with 96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d not found: ID does not exist" containerID="96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.708062 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d"} err="failed to get container status \"96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d\": rpc error: code = NotFound desc = could not find container \"96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d\": container with ID starting with 96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d not found: ID does not exist" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.712941 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9nh8f"] Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.720713 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-9nh8f"] Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.775259 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.957149 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dp8mf"] Jan 21 14:48:24 crc kubenswrapper[4902]: W0121 14:48:24.962417 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d05d6f5_a861_4117_b4a0_00e98da2fe57.slice/crio-892b16d9b552981852dbae22ecf0709c1ddbb9dec02bc25f44045253ac7f1624 WatchSource:0}: Error finding container 892b16d9b552981852dbae22ecf0709c1ddbb9dec02bc25f44045253ac7f1624: Status 404 returned error can't find the container with id 892b16d9b552981852dbae22ecf0709c1ddbb9dec02bc25f44045253ac7f1624 Jan 21 14:48:25 crc kubenswrapper[4902]: I0121 14:48:25.676631 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dp8mf" event={"ID":"2d05d6f5-a861-4117-b4a0-00e98da2fe57","Type":"ContainerStarted","Data":"bfdb1bdf9425993b52dca8ccf0e0a670eeaa8e6ee6394f9cfc2852d0f2226b0c"} Jan 21 14:48:25 crc kubenswrapper[4902]: I0121 14:48:25.676929 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dp8mf" event={"ID":"2d05d6f5-a861-4117-b4a0-00e98da2fe57","Type":"ContainerStarted","Data":"892b16d9b552981852dbae22ecf0709c1ddbb9dec02bc25f44045253ac7f1624"} Jan 21 14:48:25 crc kubenswrapper[4902]: I0121 14:48:25.688914 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-dp8mf" podStartSLOduration=1.244428336 podStartE2EDuration="1.688897371s" podCreationTimestamp="2026-01-21 14:48:24 +0000 UTC" firstStartedPulling="2026-01-21 14:48:24.965531705 +0000 UTC m=+867.042364734" lastFinishedPulling="2026-01-21 14:48:25.41000072 +0000 UTC m=+867.486833769" observedRunningTime="2026-01-21 14:48:25.688260653 +0000 UTC m=+867.765093682" watchObservedRunningTime="2026-01-21 14:48:25.688897371 +0000 UTC m=+867.765730400" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.231623 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7lxbr"] Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.233017 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.249906 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lxbr"] Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.340397 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bac350c4-c5e3-4fa1-8a4d-88ba0729a776" path="/var/lib/kubelet/pods/bac350c4-c5e3-4fa1-8a4d-88ba0729a776/volumes" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.358878 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22vvw\" (UniqueName: \"kubernetes.io/projected/0eb25c9d-2c71-4c7c-892b-bce263563735-kube-api-access-22vvw\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.358992 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-catalog-content\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.359077 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-utilities\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.460672 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-catalog-content\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.461010 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-utilities\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.461074 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22vvw\" (UniqueName: \"kubernetes.io/projected/0eb25c9d-2c71-4c7c-892b-bce263563735-kube-api-access-22vvw\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.461220 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-catalog-content\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.461543 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-utilities\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.484701 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22vvw\" (UniqueName: \"kubernetes.io/projected/0eb25c9d-2c71-4c7c-892b-bce263563735-kube-api-access-22vvw\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.548033 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.828755 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lxbr"] Jan 21 14:48:27 crc kubenswrapper[4902]: I0121 14:48:27.690505 4902 generic.go:334] "Generic (PLEG): container finished" podID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerID="e6887d84ac19036b8225ed94be4c49f586fb8bc64a8e5b4a5855f54696ee47ce" exitCode=0 Jan 21 14:48:27 crc kubenswrapper[4902]: I0121 14:48:27.690620 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxbr" event={"ID":"0eb25c9d-2c71-4c7c-892b-bce263563735","Type":"ContainerDied","Data":"e6887d84ac19036b8225ed94be4c49f586fb8bc64a8e5b4a5855f54696ee47ce"} Jan 21 14:48:27 crc kubenswrapper[4902]: I0121 14:48:27.691312 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxbr" event={"ID":"0eb25c9d-2c71-4c7c-892b-bce263563735","Type":"ContainerStarted","Data":"64d3c73fd6fc6e7756ced5576e58d05676abc9945d3ce06fd36084d244f71002"} Jan 21 14:48:28 crc kubenswrapper[4902]: I0121 14:48:28.704850 4902 generic.go:334] "Generic (PLEG): container finished" podID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerID="bfa8270d976a4fc7059e932914709e76561f9cb9d254e260afd041df52378c93" exitCode=0 Jan 21 14:48:28 crc kubenswrapper[4902]: I0121 14:48:28.704936 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxbr" event={"ID":"0eb25c9d-2c71-4c7c-892b-bce263563735","Type":"ContainerDied","Data":"bfa8270d976a4fc7059e932914709e76561f9cb9d254e260afd041df52378c93"} Jan 21 14:48:29 crc kubenswrapper[4902]: I0121 14:48:29.713640 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxbr" event={"ID":"0eb25c9d-2c71-4c7c-892b-bce263563735","Type":"ContainerStarted","Data":"f493d4635a60733e8d14df746c92f5036e8289697348e22bd097ef7a88cc56c2"} Jan 21 14:48:34 crc kubenswrapper[4902]: I0121 14:48:34.775941 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:34 crc kubenswrapper[4902]: I0121 14:48:34.776527 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:34 crc kubenswrapper[4902]: I0121 14:48:34.801868 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:34 crc kubenswrapper[4902]: I0121 14:48:34.815990 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7lxbr" 
podStartSLOduration=7.429198801 podStartE2EDuration="8.815957513s" podCreationTimestamp="2026-01-21 14:48:26 +0000 UTC" firstStartedPulling="2026-01-21 14:48:27.69259773 +0000 UTC m=+869.769430759" lastFinishedPulling="2026-01-21 14:48:29.079356422 +0000 UTC m=+871.156189471" observedRunningTime="2026-01-21 14:48:29.736852167 +0000 UTC m=+871.813685186" watchObservedRunningTime="2026-01-21 14:48:34.815957513 +0000 UTC m=+876.892790542" Jan 21 14:48:35 crc kubenswrapper[4902]: I0121 14:48:35.794958 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.548463 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.548865 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.591789 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.795241 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.866193 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv"] Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.867521 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.871110 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-w9277" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.874513 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv"] Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.994130 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-bundle\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.994184 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb27m\" (UniqueName: \"kubernetes.io/projected/f7119ded-6a7d-468d-acc4-9d1d1045656c-kube-api-access-hb27m\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.994210 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-util\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: 
\"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.095817 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-bundle\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.095869 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb27m\" (UniqueName: \"kubernetes.io/projected/f7119ded-6a7d-468d-acc4-9d1d1045656c-kube-api-access-hb27m\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.095895 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-util\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.096438 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-util\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.096709 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-bundle\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.120878 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb27m\" (UniqueName: \"kubernetes.io/projected/f7119ded-6a7d-468d-acc4-9d1d1045656c-kube-api-access-hb27m\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.190249 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.424184 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv"] Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.784659 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" event={"ID":"f7119ded-6a7d-468d-acc4-9d1d1045656c","Type":"ContainerStarted","Data":"73c301ca2159c0c33d1998bfdd53bb9012794bc26bd074af6c573b20ff2d743a"} Jan 21 14:48:38 crc kubenswrapper[4902]: I0121 14:48:38.792393 4902 generic.go:334] "Generic (PLEG): container finished" podID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerID="6850fc9e636e2b34d7f029fff57d259c26e305ce55ddc05019a045301860f0eb" exitCode=0 Jan 21 14:48:38 crc kubenswrapper[4902]: I0121 14:48:38.792455 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" event={"ID":"f7119ded-6a7d-468d-acc4-9d1d1045656c","Type":"ContainerDied","Data":"6850fc9e636e2b34d7f029fff57d259c26e305ce55ddc05019a045301860f0eb"} Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.029686 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gxj9j"] Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.031471 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.042626 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gxj9j"] Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.133966 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzph4\" (UniqueName: \"kubernetes.io/projected/fabdb2a2-7f8e-40a4-a150-6bb794482383-kube-api-access-rzph4\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.134144 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-utilities\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.134184 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-catalog-content\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.235263 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzph4\" (UniqueName: \"kubernetes.io/projected/fabdb2a2-7f8e-40a4-a150-6bb794482383-kube-api-access-rzph4\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.235346 
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.235395 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-catalog-content\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j"
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.235929 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-utilities\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j"
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.236015 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-catalog-content\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j"
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.254926 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzph4\" (UniqueName: \"kubernetes.io/projected/fabdb2a2-7f8e-40a4-a150-6bb794482383-kube-api-access-rzph4\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j"
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.352395 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gxj9j"
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.423848 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lxbr"]
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.424071 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7lxbr" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerName="registry-server" containerID="cri-o://f493d4635a60733e8d14df746c92f5036e8289697348e22bd097ef7a88cc56c2" gracePeriod=2
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.802998 4902 generic.go:334] "Generic (PLEG): container finished" podID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerID="bcb171e1d455922261e79a352408e9c0c7a85f4fccfdacbb80bea720c98a917f" exitCode=0
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.803242 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" event={"ID":"f7119ded-6a7d-468d-acc4-9d1d1045656c","Type":"ContainerDied","Data":"bcb171e1d455922261e79a352408e9c0c7a85f4fccfdacbb80bea720c98a917f"}
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.805957 4902 generic.go:334] "Generic (PLEG): container finished" podID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerID="f493d4635a60733e8d14df746c92f5036e8289697348e22bd097ef7a88cc56c2" exitCode=0
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.805987 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxbr" event={"ID":"0eb25c9d-2c71-4c7c-892b-bce263563735","Type":"ContainerDied","Data":"f493d4635a60733e8d14df746c92f5036e8289697348e22bd097ef7a88cc56c2"}
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.892963 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lxbr"
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.924101 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gxj9j"]
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.986996 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22vvw\" (UniqueName: \"kubernetes.io/projected/0eb25c9d-2c71-4c7c-892b-bce263563735-kube-api-access-22vvw\") pod \"0eb25c9d-2c71-4c7c-892b-bce263563735\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") "
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.987115 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-catalog-content\") pod \"0eb25c9d-2c71-4c7c-892b-bce263563735\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") "
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.987181 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-utilities\") pod \"0eb25c9d-2c71-4c7c-892b-bce263563735\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") "
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.988214 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-utilities" (OuterVolumeSpecName: "utilities") pod "0eb25c9d-2c71-4c7c-892b-bce263563735" (UID: "0eb25c9d-2c71-4c7c-892b-bce263563735"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.994284 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eb25c9d-2c71-4c7c-892b-bce263563735-kube-api-access-22vvw" (OuterVolumeSpecName: "kube-api-access-22vvw") pod "0eb25c9d-2c71-4c7c-892b-bce263563735" (UID: "0eb25c9d-2c71-4c7c-892b-bce263563735"). InnerVolumeSpecName "kube-api-access-22vvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.039976 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0eb25c9d-2c71-4c7c-892b-bce263563735" (UID: "0eb25c9d-2c71-4c7c-892b-bce263563735"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.088671 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22vvw\" (UniqueName: \"kubernetes.io/projected/0eb25c9d-2c71-4c7c-892b-bce263563735-kube-api-access-22vvw\") on node \"crc\" DevicePath \"\""
Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.088976 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.088988 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.816229 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxbr" event={"ID":"0eb25c9d-2c71-4c7c-892b-bce263563735","Type":"ContainerDied","Data":"64d3c73fd6fc6e7756ced5576e58d05676abc9945d3ce06fd36084d244f71002"}
Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.816276 4902 scope.go:117] "RemoveContainer" containerID="f493d4635a60733e8d14df746c92f5036e8289697348e22bd097ef7a88cc56c2"
Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.816278 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lxbr"
Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.818379 4902 generic.go:334] "Generic (PLEG): container finished" podID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerID="58e01dcf152baa0605dc7a6d72585435564e75b4f638f76712ac46553f8eb051" exitCode=0
Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.818455 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxj9j" event={"ID":"fabdb2a2-7f8e-40a4-a150-6bb794482383","Type":"ContainerDied","Data":"58e01dcf152baa0605dc7a6d72585435564e75b4f638f76712ac46553f8eb051"}
Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.818507 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxj9j" event={"ID":"fabdb2a2-7f8e-40a4-a150-6bb794482383","Type":"ContainerStarted","Data":"7cfb471925da61d1c73706b42117dc901586ec198cdb9360f6a17d4a36f443f6"}
Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.821545 4902 generic.go:334] "Generic (PLEG): container finished" podID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerID="a7fddc682f8d31b3ea32932b92487e1e92d53984d6e4fe5fb67526e9a2a56398" exitCode=0
Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.821577 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" event={"ID":"f7119ded-6a7d-468d-acc4-9d1d1045656c","Type":"ContainerDied","Data":"a7fddc682f8d31b3ea32932b92487e1e92d53984d6e4fe5fb67526e9a2a56398"}
Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.844351 4902 scope.go:117] "RemoveContainer" containerID="bfa8270d976a4fc7059e932914709e76561f9cb9d254e260afd041df52378c93"
Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.889427 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lxbr"]
Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.892284 4902 scope.go:117] "RemoveContainer" containerID="e6887d84ac19036b8225ed94be4c49f586fb8bc64a8e5b4a5855f54696ee47ce"
Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.894767 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7lxbr"]
Jan 21 14:48:42 crc kubenswrapper[4902]: I0121 14:48:42.305253 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" path="/var/lib/kubelet/pods/0eb25c9d-2c71-4c7c-892b-bce263563735/volumes"
Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.086292 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv"
Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.112641 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-util\") pod \"f7119ded-6a7d-468d-acc4-9d1d1045656c\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") "
Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.112740 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb27m\" (UniqueName: \"kubernetes.io/projected/f7119ded-6a7d-468d-acc4-9d1d1045656c-kube-api-access-hb27m\") pod \"f7119ded-6a7d-468d-acc4-9d1d1045656c\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") "
Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.112828 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-bundle\") pod \"f7119ded-6a7d-468d-acc4-9d1d1045656c\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") "
Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.113515 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-bundle" (OuterVolumeSpecName: "bundle") pod "f7119ded-6a7d-468d-acc4-9d1d1045656c" (UID: "f7119ded-6a7d-468d-acc4-9d1d1045656c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.117677 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7119ded-6a7d-468d-acc4-9d1d1045656c-kube-api-access-hb27m" (OuterVolumeSpecName: "kube-api-access-hb27m") pod "f7119ded-6a7d-468d-acc4-9d1d1045656c" (UID: "f7119ded-6a7d-468d-acc4-9d1d1045656c"). InnerVolumeSpecName "kube-api-access-hb27m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.134820 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-util" (OuterVolumeSpecName: "util") pod "f7119ded-6a7d-468d-acc4-9d1d1045656c" (UID: "f7119ded-6a7d-468d-acc4-9d1d1045656c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.214226 4902 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-util\") on node \"crc\" DevicePath \"\"" Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.214267 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb27m\" (UniqueName: \"kubernetes.io/projected/f7119ded-6a7d-468d-acc4-9d1d1045656c-kube-api-access-hb27m\") on node \"crc\" DevicePath \"\"" Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.214277 4902 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.839514 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" event={"ID":"f7119ded-6a7d-468d-acc4-9d1d1045656c","Type":"ContainerDied","Data":"73c301ca2159c0c33d1998bfdd53bb9012794bc26bd074af6c573b20ff2d743a"} Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.839571 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73c301ca2159c0c33d1998bfdd53bb9012794bc26bd074af6c573b20ff2d743a" Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.839588 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:45 crc kubenswrapper[4902]: I0121 14:48:45.863156 4902 generic.go:334] "Generic (PLEG): container finished" podID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerID="4fdf41c6649aa4c07005037da3fcebbfd51eb386bde51c1b9445b8ed1c7ff15a" exitCode=0 Jan 21 14:48:45 crc kubenswrapper[4902]: I0121 14:48:45.863232 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxj9j" event={"ID":"fabdb2a2-7f8e-40a4-a150-6bb794482383","Type":"ContainerDied","Data":"4fdf41c6649aa4c07005037da3fcebbfd51eb386bde51c1b9445b8ed1c7ff15a"} Jan 21 14:48:46 crc kubenswrapper[4902]: I0121 14:48:46.877686 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxj9j" event={"ID":"fabdb2a2-7f8e-40a4-a150-6bb794482383","Type":"ContainerStarted","Data":"16997efae7f7e817fb4dcb1190338876eb5cd2dcf755844c0ab32fc4fbef23fc"} Jan 21 14:48:46 crc kubenswrapper[4902]: I0121 14:48:46.902442 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gxj9j" podStartSLOduration=2.425071646 podStartE2EDuration="6.902425232s" podCreationTimestamp="2026-01-21 14:48:40 +0000 UTC" firstStartedPulling="2026-01-21 14:48:41.821007021 +0000 UTC m=+883.897840090" lastFinishedPulling="2026-01-21 14:48:46.298360617 +0000 UTC m=+888.375193676" observedRunningTime="2026-01-21 14:48:46.897018353 +0000 UTC m=+888.973851382" watchObservedRunningTime="2026-01-21 14:48:46.902425232 +0000 UTC m=+888.979258261" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.867764 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp"] Jan 21 14:48:47 crc kubenswrapper[4902]: E0121 14:48:47.867992 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7119ded-6a7d-468d-acc4-9d1d1045656c" 
containerName="pull" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868003 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerName="pull" Jan 21 14:48:47 crc kubenswrapper[4902]: E0121 14:48:47.868014 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerName="extract" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868019 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerName="extract" Jan 21 14:48:47 crc kubenswrapper[4902]: E0121 14:48:47.868028 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerName="extract-utilities" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868035 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerName="extract-utilities" Jan 21 14:48:47 crc kubenswrapper[4902]: E0121 14:48:47.868065 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerName="util" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868071 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerName="util" Jan 21 14:48:47 crc kubenswrapper[4902]: E0121 14:48:47.868079 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerName="registry-server" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868085 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerName="registry-server" Jan 21 14:48:47 crc kubenswrapper[4902]: E0121 14:48:47.868098 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerName="extract-content" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868105 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerName="extract-content" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868202 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerName="registry-server" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868219 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerName="extract" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868607 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.870740 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-4mwhd" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.886903 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx6fm\" (UniqueName: \"kubernetes.io/projected/1fbcd3da-0b42-4d83-b774-776f9d1612d5-kube-api-access-wx6fm\") pod \"openstack-operator-controller-init-6d4d7d8545-mvcwp\" (UID: \"1fbcd3da-0b42-4d83-b774-776f9d1612d5\") " pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.900602 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp"] Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.988726 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx6fm\" (UniqueName: \"kubernetes.io/projected/1fbcd3da-0b42-4d83-b774-776f9d1612d5-kube-api-access-wx6fm\") pod \"openstack-operator-controller-init-6d4d7d8545-mvcwp\" (UID: \"1fbcd3da-0b42-4d83-b774-776f9d1612d5\") " pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" Jan 21 14:48:48 crc kubenswrapper[4902]: I0121 14:48:48.015595 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx6fm\" (UniqueName: \"kubernetes.io/projected/1fbcd3da-0b42-4d83-b774-776f9d1612d5-kube-api-access-wx6fm\") pod \"openstack-operator-controller-init-6d4d7d8545-mvcwp\" (UID: \"1fbcd3da-0b42-4d83-b774-776f9d1612d5\") " pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" Jan 21 14:48:48 crc kubenswrapper[4902]: I0121 14:48:48.185766 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" Jan 21 14:48:48 crc kubenswrapper[4902]: I0121 14:48:48.441920 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp"] Jan 21 14:48:48 crc kubenswrapper[4902]: W0121 14:48:48.450279 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fbcd3da_0b42_4d83_b774_776f9d1612d5.slice/crio-3d5b56f885ca84af16d276764383b7a1ea6a0dc8dfc38d758399af82d3ff862e WatchSource:0}: Error finding container 3d5b56f885ca84af16d276764383b7a1ea6a0dc8dfc38d758399af82d3ff862e: Status 404 returned error can't find the container with id 3d5b56f885ca84af16d276764383b7a1ea6a0dc8dfc38d758399af82d3ff862e Jan 21 14:48:48 crc kubenswrapper[4902]: I0121 14:48:48.904707 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" event={"ID":"1fbcd3da-0b42-4d83-b774-776f9d1612d5","Type":"ContainerStarted","Data":"3d5b56f885ca84af16d276764383b7a1ea6a0dc8dfc38d758399af82d3ff862e"} Jan 21 14:48:50 crc kubenswrapper[4902]: I0121 14:48:50.352822 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:50 crc kubenswrapper[4902]: I0121 14:48:50.353098 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:50 crc kubenswrapper[4902]: I0121 14:48:50.401651 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:52 crc kubenswrapper[4902]: I0121 14:48:52.950199 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" event={"ID":"1fbcd3da-0b42-4d83-b774-776f9d1612d5","Type":"ContainerStarted","Data":"f77e686e94457c430a7256fd0ca7237386d9140cf367a6391af6af32f945073c"} Jan 21 14:48:52 crc kubenswrapper[4902]: I0121 14:48:52.950678 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" Jan 21 14:48:52 crc kubenswrapper[4902]: I0121 14:48:52.984015 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" podStartSLOduration=1.714633363 podStartE2EDuration="5.983999249s" podCreationTimestamp="2026-01-21 14:48:47 +0000 UTC" firstStartedPulling="2026-01-21 14:48:48.451610009 +0000 UTC m=+890.528443038" lastFinishedPulling="2026-01-21 14:48:52.720975895 +0000 UTC m=+894.797808924" observedRunningTime="2026-01-21 14:48:52.977213953 +0000 UTC m=+895.054046992" watchObservedRunningTime="2026-01-21 14:48:52.983999249 +0000 UTC m=+895.060832268" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.244286 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-95v9q"] Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.246324 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.265786 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-95v9q"] Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.286742 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzmdr\" (UniqueName: \"kubernetes.io/projected/de16538d-0e36-4daf-8621-a819da9a3cb6-kube-api-access-wzmdr\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.286800 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-utilities\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.286818 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-catalog-content\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.387757 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzmdr\" (UniqueName: \"kubernetes.io/projected/de16538d-0e36-4daf-8621-a819da9a3cb6-kube-api-access-wzmdr\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.387840 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-utilities\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.387864 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-catalog-content\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.388564 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-catalog-content\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.389194 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-utilities\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.415089 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wzmdr\" (UniqueName: \"kubernetes.io/projected/de16538d-0e36-4daf-8621-a819da9a3cb6-kube-api-access-wzmdr\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.605139 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:55 crc kubenswrapper[4902]: I0121 14:48:55.055971 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-95v9q"] Jan 21 14:48:55 crc kubenswrapper[4902]: I0121 14:48:55.969938 4902 generic.go:334] "Generic (PLEG): container finished" podID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerID="509905b535dba8cb83199b97a62088fcef4741892457d8fea33112bfc7c67c0e" exitCode=0 Jan 21 14:48:55 crc kubenswrapper[4902]: I0121 14:48:55.969976 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95v9q" event={"ID":"de16538d-0e36-4daf-8621-a819da9a3cb6","Type":"ContainerDied","Data":"509905b535dba8cb83199b97a62088fcef4741892457d8fea33112bfc7c67c0e"} Jan 21 14:48:55 crc kubenswrapper[4902]: I0121 14:48:55.970000 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95v9q" event={"ID":"de16538d-0e36-4daf-8621-a819da9a3cb6","Type":"ContainerStarted","Data":"4cc0fcd4edb2759e0ad0548d3682b79a67201175a41ff2a22f6b50c3bd0b04f8"} Jan 21 14:48:57 crc kubenswrapper[4902]: I0121 14:48:57.986676 4902 generic.go:334] "Generic (PLEG): container finished" podID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerID="21e045e5022e7a8dfff24d47031d4a24b623242982635f3f26f4723fd15701b3" exitCode=0 Jan 21 14:48:57 crc kubenswrapper[4902]: I0121 14:48:57.986714 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95v9q" event={"ID":"de16538d-0e36-4daf-8621-a819da9a3cb6","Type":"ContainerDied","Data":"21e045e5022e7a8dfff24d47031d4a24b623242982635f3f26f4723fd15701b3"} Jan 21 14:48:58 crc kubenswrapper[4902]: I0121 14:48:58.188698 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" Jan 21 14:48:58 crc kubenswrapper[4902]: I0121 14:48:58.995702 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95v9q" event={"ID":"de16538d-0e36-4daf-8621-a819da9a3cb6","Type":"ContainerStarted","Data":"68b24c9e897b0a83b5bad964b916be23b70640bd8873d28be222c9b5977c07be"} Jan 21 14:48:59 crc kubenswrapper[4902]: I0121 14:48:59.025307 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-95v9q" podStartSLOduration=2.650801001 podStartE2EDuration="5.025292419s" podCreationTimestamp="2026-01-21 14:48:54 +0000 UTC" firstStartedPulling="2026-01-21 14:48:55.971317543 +0000 UTC m=+898.048150572" lastFinishedPulling="2026-01-21 14:48:58.345808961 +0000 UTC m=+900.422641990" observedRunningTime="2026-01-21 14:48:59.023715736 +0000 UTC m=+901.100548765" watchObservedRunningTime="2026-01-21 14:48:59.025292419 +0000 UTC m=+901.102125448" Jan 21 14:49:00 crc kubenswrapper[4902]: I0121 14:49:00.431613 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:49:04 crc kubenswrapper[4902]: I0121 14:49:04.020535 4902 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gxj9j"] Jan 21 14:49:04 crc kubenswrapper[4902]: I0121 14:49:04.021794 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gxj9j" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerName="registry-server" containerID="cri-o://16997efae7f7e817fb4dcb1190338876eb5cd2dcf755844c0ab32fc4fbef23fc" gracePeriod=2 Jan 21 14:49:04 crc kubenswrapper[4902]: I0121 14:49:04.605549 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:49:04 crc kubenswrapper[4902]: I0121 14:49:04.605796 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:49:04 crc kubenswrapper[4902]: I0121 14:49:04.661979 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.056036 4902 generic.go:334] "Generic (PLEG): container finished" podID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerID="16997efae7f7e817fb4dcb1190338876eb5cd2dcf755844c0ab32fc4fbef23fc" exitCode=0 Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.056089 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxj9j" event={"ID":"fabdb2a2-7f8e-40a4-a150-6bb794482383","Type":"ContainerDied","Data":"16997efae7f7e817fb4dcb1190338876eb5cd2dcf755844c0ab32fc4fbef23fc"} Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.117557 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.502215 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.657823 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-catalog-content\") pod \"fabdb2a2-7f8e-40a4-a150-6bb794482383\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.657910 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzph4\" (UniqueName: \"kubernetes.io/projected/fabdb2a2-7f8e-40a4-a150-6bb794482383-kube-api-access-rzph4\") pod \"fabdb2a2-7f8e-40a4-a150-6bb794482383\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.657981 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-utilities\") pod \"fabdb2a2-7f8e-40a4-a150-6bb794482383\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.659005 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-utilities" (OuterVolumeSpecName: "utilities") pod "fabdb2a2-7f8e-40a4-a150-6bb794482383" (UID: "fabdb2a2-7f8e-40a4-a150-6bb794482383"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.664991 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fabdb2a2-7f8e-40a4-a150-6bb794482383-kube-api-access-rzph4" (OuterVolumeSpecName: "kube-api-access-rzph4") pod "fabdb2a2-7f8e-40a4-a150-6bb794482383" (UID: "fabdb2a2-7f8e-40a4-a150-6bb794482383"). InnerVolumeSpecName "kube-api-access-rzph4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.704920 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fabdb2a2-7f8e-40a4-a150-6bb794482383" (UID: "fabdb2a2-7f8e-40a4-a150-6bb794482383"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.759806 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzph4\" (UniqueName: \"kubernetes.io/projected/fabdb2a2-7f8e-40a4-a150-6bb794482383-kube-api-access-rzph4\") on node \"crc\" DevicePath \"\"" Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.759846 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.759856 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:49:06 crc kubenswrapper[4902]: I0121 14:49:06.066065 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:49:06 crc kubenswrapper[4902]: I0121 14:49:06.066219 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxj9j" event={"ID":"fabdb2a2-7f8e-40a4-a150-6bb794482383","Type":"ContainerDied","Data":"7cfb471925da61d1c73706b42117dc901586ec198cdb9360f6a17d4a36f443f6"} Jan 21 14:49:06 crc kubenswrapper[4902]: I0121 14:49:06.066877 4902 scope.go:117] "RemoveContainer" containerID="16997efae7f7e817fb4dcb1190338876eb5cd2dcf755844c0ab32fc4fbef23fc" Jan 21 14:49:06 crc kubenswrapper[4902]: I0121 14:49:06.091713 4902 scope.go:117] "RemoveContainer" containerID="4fdf41c6649aa4c07005037da3fcebbfd51eb386bde51c1b9445b8ed1c7ff15a" Jan 21 14:49:06 crc kubenswrapper[4902]: I0121 14:49:06.105375 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gxj9j"] Jan 21 14:49:06 crc kubenswrapper[4902]: I0121 14:49:06.111352 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gxj9j"] Jan 21 14:49:06 crc kubenswrapper[4902]: I0121 14:49:06.122823 4902 scope.go:117] "RemoveContainer" containerID="58e01dcf152baa0605dc7a6d72585435564e75b4f638f76712ac46553f8eb051" Jan 21 14:49:06 crc kubenswrapper[4902]: I0121 14:49:06.303836 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" path="/var/lib/kubelet/pods/fabdb2a2-7f8e-40a4-a150-6bb794482383/volumes" Jan 21 14:49:08 crc kubenswrapper[4902]: I0121 14:49:08.221286 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-95v9q"] Jan 21 14:49:08 crc kubenswrapper[4902]: I0121 14:49:08.221503 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-95v9q" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerName="registry-server" containerID="cri-o://68b24c9e897b0a83b5bad964b916be23b70640bd8873d28be222c9b5977c07be" gracePeriod=2 Jan 21 14:49:09 crc kubenswrapper[4902]: I0121 14:49:09.105619 4902 generic.go:334] "Generic (PLEG): container finished" podID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerID="68b24c9e897b0a83b5bad964b916be23b70640bd8873d28be222c9b5977c07be" exitCode=0 Jan 21 14:49:09 crc kubenswrapper[4902]: I0121 14:49:09.105708 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95v9q" event={"ID":"de16538d-0e36-4daf-8621-a819da9a3cb6","Type":"ContainerDied","Data":"68b24c9e897b0a83b5bad964b916be23b70640bd8873d28be222c9b5977c07be"} Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.226310 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.319713 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-utilities\") pod \"de16538d-0e36-4daf-8621-a819da9a3cb6\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.320101 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-catalog-content\") pod \"de16538d-0e36-4daf-8621-a819da9a3cb6\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.320286 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzmdr\" (UniqueName: \"kubernetes.io/projected/de16538d-0e36-4daf-8621-a819da9a3cb6-kube-api-access-wzmdr\") pod \"de16538d-0e36-4daf-8621-a819da9a3cb6\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.320596 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-utilities" (OuterVolumeSpecName: "utilities") pod "de16538d-0e36-4daf-8621-a819da9a3cb6" (UID: "de16538d-0e36-4daf-8621-a819da9a3cb6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.321490 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.324891 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de16538d-0e36-4daf-8621-a819da9a3cb6-kube-api-access-wzmdr" (OuterVolumeSpecName: "kube-api-access-wzmdr") pod "de16538d-0e36-4daf-8621-a819da9a3cb6" (UID: "de16538d-0e36-4daf-8621-a819da9a3cb6"). InnerVolumeSpecName "kube-api-access-wzmdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.345906 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de16538d-0e36-4daf-8621-a819da9a3cb6" (UID: "de16538d-0e36-4daf-8621-a819da9a3cb6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.422347 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.422384 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzmdr\" (UniqueName: \"kubernetes.io/projected/de16538d-0e36-4daf-8621-a819da9a3cb6-kube-api-access-wzmdr\") on node \"crc\" DevicePath \"\"" Jan 21 14:49:11 crc kubenswrapper[4902]: I0121 14:49:11.121330 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95v9q" event={"ID":"de16538d-0e36-4daf-8621-a819da9a3cb6","Type":"ContainerDied","Data":"4cc0fcd4edb2759e0ad0548d3682b79a67201175a41ff2a22f6b50c3bd0b04f8"} Jan 21 14:49:11 crc kubenswrapper[4902]: I0121 14:49:11.121646 4902 scope.go:117] "RemoveContainer" containerID="68b24c9e897b0a83b5bad964b916be23b70640bd8873d28be222c9b5977c07be" Jan 21 14:49:11 crc kubenswrapper[4902]: I0121 14:49:11.121757 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:49:11 crc kubenswrapper[4902]: I0121 14:49:11.141825 4902 scope.go:117] "RemoveContainer" containerID="21e045e5022e7a8dfff24d47031d4a24b623242982635f3f26f4723fd15701b3" Jan 21 14:49:11 crc kubenswrapper[4902]: I0121 14:49:11.161272 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-95v9q"] Jan 21 14:49:11 crc kubenswrapper[4902]: I0121 14:49:11.168367 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-95v9q"] Jan 21 14:49:11 crc kubenswrapper[4902]: I0121 14:49:11.180949 4902 scope.go:117] "RemoveContainer" containerID="509905b535dba8cb83199b97a62088fcef4741892457d8fea33112bfc7c67c0e" Jan 21 14:49:12 crc kubenswrapper[4902]: I0121 14:49:12.301289 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" path="/var/lib/kubelet/pods/de16538d-0e36-4daf-8621-a819da9a3cb6/volumes" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.322552 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd"] Jan 21 14:49:17 crc kubenswrapper[4902]: E0121 14:49:17.323066 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerName="extract-utilities" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323083 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerName="extract-utilities" Jan 21 14:49:17 crc kubenswrapper[4902]: E0121 14:49:17.323099 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerName="extract-content" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323107 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerName="extract-content" Jan 21 14:49:17 crc kubenswrapper[4902]: E0121 14:49:17.323119 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerName="extract-content" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323129 4902 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerName="extract-content" Jan 21 14:49:17 crc kubenswrapper[4902]: E0121 14:49:17.323138 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerName="extract-utilities" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323145 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerName="extract-utilities" Jan 21 14:49:17 crc kubenswrapper[4902]: E0121 14:49:17.323155 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerName="registry-server" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323162 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerName="registry-server" Jan 21 14:49:17 crc kubenswrapper[4902]: E0121 14:49:17.323171 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerName="registry-server" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323178 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerName="registry-server" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323304 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerName="registry-server" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323317 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerName="registry-server" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323823 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.326011 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zxs6r" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.334620 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.335539 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.338635 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-k94tp" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.347014 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.355888 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.374887 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.375560 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-gffs4"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.376134 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.376631 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.381231 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-qbtkd" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.381462 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2tlm5" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.392652 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.452116 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-gffs4"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.459899 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9hl4\" (UniqueName: \"kubernetes.io/projected/3c1e8b4d-a47d-4a6e-be63-bfc41d04d964-kube-api-access-r9hl4\") pod \"glance-operator-controller-manager-c6994669c-gffs4\" (UID: \"3c1e8b4d-a47d-4a6e-be63-bfc41d04d964\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.460006 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjbr4\" (UniqueName: \"kubernetes.io/projected/66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e-kube-api-access-bjbr4\") pod \"barbican-operator-controller-manager-7ddb5c749-j6fwd\" (UID: \"66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.460120 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bngmh\" (UniqueName: 
\"kubernetes.io/projected/b924ea4f-71c9-4f42-aa0a-a4945ea589e3-kube-api-access-bngmh\") pod \"cinder-operator-controller-manager-9b68f5989-nh8zr\" (UID: \"b924ea4f-71c9-4f42-aa0a-a4945ea589e3\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.478256 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.481535 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.486481 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-wmkds" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.511747 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.518287 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.537506 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jwshd" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.561297 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwcjk\" (UniqueName: \"kubernetes.io/projected/bc4c2749-7073-4bb8-8c87-736187565b08-kube-api-access-cwcjk\") pod \"designate-operator-controller-manager-9f958b845-sdkxs\" (UID: \"bc4c2749-7073-4bb8-8c87-736187565b08\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.561367 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6tz8\" (UniqueName: \"kubernetes.io/projected/56c38bff-8549-485e-a91f-1d89d801a8ee-kube-api-access-z6tz8\") pod \"heat-operator-controller-manager-594c8c9d5d-lttm9\" (UID: \"56c38bff-8549-485e-a91f-1d89d801a8ee\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.561414 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjbr4\" (UniqueName: \"kubernetes.io/projected/66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e-kube-api-access-bjbr4\") pod \"barbican-operator-controller-manager-7ddb5c749-j6fwd\" (UID: \"66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.561452 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqspm\" (UniqueName: \"kubernetes.io/projected/05001c4b-c8f0-46ea-bf02-d7537d8a373b-kube-api-access-zqspm\") pod \"horizon-operator-controller-manager-77d5c5b54f-nqnfh\" (UID: \"05001c4b-c8f0-46ea-bf02-d7537d8a373b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.561496 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bngmh\" (UniqueName: \"kubernetes.io/projected/b924ea4f-71c9-4f42-aa0a-a4945ea589e3-kube-api-access-bngmh\") pod \"cinder-operator-controller-manager-9b68f5989-nh8zr\" (UID: \"b924ea4f-71c9-4f42-aa0a-a4945ea589e3\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.561550 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9hl4\" (UniqueName: \"kubernetes.io/projected/3c1e8b4d-a47d-4a6e-be63-bfc41d04d964-kube-api-access-r9hl4\") pod \"glance-operator-controller-manager-c6994669c-gffs4\" (UID: \"3c1e8b4d-a47d-4a6e-be63-bfc41d04d964\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.564552 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.575193 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.586947 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.587892 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.595422 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bngmh\" (UniqueName: \"kubernetes.io/projected/b924ea4f-71c9-4f42-aa0a-a4945ea589e3-kube-api-access-bngmh\") pod \"cinder-operator-controller-manager-9b68f5989-nh8zr\" (UID: \"b924ea4f-71c9-4f42-aa0a-a4945ea589e3\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.606362 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ltdfb" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.606548 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.617615 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjbr4\" (UniqueName: \"kubernetes.io/projected/66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e-kube-api-access-bjbr4\") pod \"barbican-operator-controller-manager-7ddb5c749-j6fwd\" (UID: \"66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.639134 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9hl4\" (UniqueName: \"kubernetes.io/projected/3c1e8b4d-a47d-4a6e-be63-bfc41d04d964-kube-api-access-r9hl4\") pod \"glance-operator-controller-manager-c6994669c-gffs4\" (UID: \"3c1e8b4d-a47d-4a6e-be63-bfc41d04d964\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.652827 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.653380 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.656442 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.656729 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.660197 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qwhpd" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.666087 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqspm\" (UniqueName: \"kubernetes.io/projected/05001c4b-c8f0-46ea-bf02-d7537d8a373b-kube-api-access-zqspm\") pod \"horizon-operator-controller-manager-77d5c5b54f-nqnfh\" (UID: \"05001c4b-c8f0-46ea-bf02-d7537d8a373b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.666425 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwcjk\" (UniqueName: \"kubernetes.io/projected/bc4c2749-7073-4bb8-8c87-736187565b08-kube-api-access-cwcjk\") pod \"designate-operator-controller-manager-9f958b845-sdkxs\" (UID: \"bc4c2749-7073-4bb8-8c87-736187565b08\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.666450 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6tz8\" (UniqueName: \"kubernetes.io/projected/56c38bff-8549-485e-a91f-1d89d801a8ee-kube-api-access-z6tz8\") pod \"heat-operator-controller-manager-594c8c9d5d-lttm9\" (UID: \"56c38bff-8549-485e-a91f-1d89d801a8ee\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.682754 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.686207 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.696197 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.708370 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-vlt4w" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.708551 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.709961 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.724199 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.735959 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwcjk\" (UniqueName: \"kubernetes.io/projected/bc4c2749-7073-4bb8-8c87-736187565b08-kube-api-access-cwcjk\") pod \"designate-operator-controller-manager-9f958b845-sdkxs\" (UID: \"bc4c2749-7073-4bb8-8c87-736187565b08\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.737904 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6tz8\" (UniqueName: \"kubernetes.io/projected/56c38bff-8549-485e-a91f-1d89d801a8ee-kube-api-access-z6tz8\") pod \"heat-operator-controller-manager-594c8c9d5d-lttm9\" (UID: \"56c38bff-8549-485e-a91f-1d89d801a8ee\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.738539 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqspm\" (UniqueName: \"kubernetes.io/projected/05001c4b-c8f0-46ea-bf02-d7537d8a373b-kube-api-access-zqspm\") pod \"horizon-operator-controller-manager-77d5c5b54f-nqnfh\" (UID: \"05001c4b-c8f0-46ea-bf02-d7537d8a373b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.755179 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.756068 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.768153 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqlvh\" (UniqueName: \"kubernetes.io/projected/f3f5f576-48b8-4175-8d70-d8de7e41a63a-kube-api-access-wqlvh\") pod \"ironic-operator-controller-manager-78757b4889-khcxt\" (UID: \"f3f5f576-48b8-4175-8d70-d8de7e41a63a\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.768210 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v2r9\" (UniqueName: \"kubernetes.io/projected/cea39ffd-421f-4b74-9f26-065f49e00786-kube-api-access-7v2r9\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.768257 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.768556 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-tvc2d" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.791348 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.792262 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.796127 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6dxsm" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.817628 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.827818 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.833839 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.869769 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqlvh\" (UniqueName: \"kubernetes.io/projected/f3f5f576-48b8-4175-8d70-d8de7e41a63a-kube-api-access-wqlvh\") pod \"ironic-operator-controller-manager-78757b4889-khcxt\" (UID: \"f3f5f576-48b8-4175-8d70-d8de7e41a63a\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.869835 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v2r9\" (UniqueName: \"kubernetes.io/projected/cea39ffd-421f-4b74-9f26-065f49e00786-kube-api-access-7v2r9\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.869862 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64d49\" (UniqueName: \"kubernetes.io/projected/7d33c2a4-c369-4a5f-9592-289c162f095c-kube-api-access-64d49\") pod \"keystone-operator-controller-manager-767fdc4f47-qwcvn\" (UID: \"7d33c2a4-c369-4a5f-9592-289c162f095c\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.869908 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.869934 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkxds\" (UniqueName: \"kubernetes.io/projected/a5d9aa95-7d14-4a6e-af38-dddad85007f4-kube-api-access-fkxds\") pod \"manila-operator-controller-manager-864f6b75bf-x6xrb\" (UID: \"a5d9aa95-7d14-4a6e-af38-dddad85007f4\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" Jan 21 14:49:17 crc kubenswrapper[4902]: E0121 14:49:17.870810 4902 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:17 crc kubenswrapper[4902]: E0121 14:49:17.870862 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert podName:cea39ffd-421f-4b74-9f26-065f49e00786 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:18.37084608 +0000 UTC m=+920.447679109 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert") pod "infra-operator-controller-manager-77c48c7859-46xm9" (UID: "cea39ffd-421f-4b74-9f26-065f49e00786") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.871010 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.871834 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.879586 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.889375 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-8f6dv" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.896568 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-nql9r"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.897553 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.908745 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.918354 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-tndh9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.957078 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-nql9r"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.966193 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.966919 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.967690 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqlvh\" (UniqueName: \"kubernetes.io/projected/f3f5f576-48b8-4175-8d70-d8de7e41a63a-kube-api-access-wqlvh\") pod \"ironic-operator-controller-manager-78757b4889-khcxt\" (UID: \"f3f5f576-48b8-4175-8d70-d8de7e41a63a\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.972907 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.973960 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf6dh\" (UniqueName: \"kubernetes.io/projected/0b55bf9c-cc65-446c-849e-035fb1bba4c4-kube-api-access-nf6dh\") pod \"neutron-operator-controller-manager-cb4666565-8vfnj\" (UID: \"0b55bf9c-cc65-446c-849e-035fb1bba4c4\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.974057 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkxds\" (UniqueName: \"kubernetes.io/projected/a5d9aa95-7d14-4a6e-af38-dddad85007f4-kube-api-access-fkxds\") pod \"manila-operator-controller-manager-864f6b75bf-x6xrb\" (UID: \"a5d9aa95-7d14-4a6e-af38-dddad85007f4\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.974168 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64d49\" (UniqueName: \"kubernetes.io/projected/7d33c2a4-c369-4a5f-9592-289c162f095c-kube-api-access-64d49\") pod \"keystone-operator-controller-manager-767fdc4f47-qwcvn\" (UID: \"7d33c2a4-c369-4a5f-9592-289c162f095c\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.974202 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhtrq\" (UniqueName: \"kubernetes.io/projected/01091192-af46-486f-8890-787505f3b41c-kube-api-access-fhtrq\") pod \"mariadb-operator-controller-manager-c87fff755-xrlqr\" (UID: \"01091192-af46-486f-8890-787505f3b41c\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.976081 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.976846 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.980711 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.981656 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.001413 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.023233 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.030924 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.053579 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.054973 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.065029 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.067275 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.074993 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wtxx\" (UniqueName: \"kubernetes.io/projected/3912b1da-b132-48da-9b67-1f4aeb2203c4-kube-api-access-9wtxx\") pod \"ovn-operator-controller-manager-55db956ddc-lljfd\" (UID: \"3912b1da-b132-48da-9b67-1f4aeb2203c4\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.109916 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgj2z\" (UniqueName: \"kubernetes.io/projected/b01862fd-dfad-4a73-ac90-5ef7823c06ea-kube-api-access-tgj2z\") pod \"nova-operator-controller-manager-65849867d6-nql9r\" (UID: \"b01862fd-dfad-4a73-ac90-5ef7823c06ea\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.109998 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.110059 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhtrq\" (UniqueName: \"kubernetes.io/projected/01091192-af46-486f-8890-787505f3b41c-kube-api-access-fhtrq\") pod \"mariadb-operator-controller-manager-c87fff755-xrlqr\" (UID: \"01091192-af46-486f-8890-787505f3b41c\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.110092 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npq9n\" (UniqueName: \"kubernetes.io/projected/bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90-kube-api-access-npq9n\") pod \"octavia-operator-controller-manager-7fc9b76cf6-c2nb6\" (UID: \"bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.110157 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf6dh\" (UniqueName: \"kubernetes.io/projected/0b55bf9c-cc65-446c-849e-035fb1bba4c4-kube-api-access-nf6dh\") pod \"neutron-operator-controller-manager-cb4666565-8vfnj\" (UID: \"0b55bf9c-cc65-446c-849e-035fb1bba4c4\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.110248 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l57j4\" (UniqueName: \"kubernetes.io/projected/14dc1630-021a-4b05-8ac4-d99368b51726-kube-api-access-l57j4\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.113944 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.120333 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.175655 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-gw9vr" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.176273 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-xqxwr" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.195369 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-b7z56" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.196013 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tx94m" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.196352 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-7mnnf" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.304124 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.306596 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgj2z\" (UniqueName: \"kubernetes.io/projected/b01862fd-dfad-4a73-ac90-5ef7823c06ea-kube-api-access-tgj2z\") pod \"nova-operator-controller-manager-65849867d6-nql9r\" (UID: \"b01862fd-dfad-4a73-ac90-5ef7823c06ea\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.306938 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.306985 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npq9n\" (UniqueName: \"kubernetes.io/projected/bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90-kube-api-access-npq9n\") pod \"octavia-operator-controller-manager-7fc9b76cf6-c2nb6\" (UID: \"bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.307114 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l57j4\" (UniqueName: \"kubernetes.io/projected/14dc1630-021a-4b05-8ac4-d99368b51726-kube-api-access-l57j4\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.307175 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkv5c\" (UniqueName: \"kubernetes.io/projected/1e685238-529c-4964-af9d-8abed4dfcfae-kube-api-access-xkv5c\") pod \"swift-operator-controller-manager-85dd56d4cc-wqmq2\" (UID: \"1e685238-529c-4964-af9d-8abed4dfcfae\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.307216 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wtxx\" (UniqueName: \"kubernetes.io/projected/3912b1da-b132-48da-9b67-1f4aeb2203c4-kube-api-access-9wtxx\") pod \"ovn-operator-controller-manager-55db956ddc-lljfd\" (UID: \"3912b1da-b132-48da-9b67-1f4aeb2203c4\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.307256 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4x6s\" (UniqueName: \"kubernetes.io/projected/c5d64dc8-80f6-4076-9068-11ec25d524b5-kube-api-access-l4x6s\") pod \"placement-operator-controller-manager-686df47fcb-pmvgc\" (UID: \"c5d64dc8-80f6-4076-9068-11ec25d524b5\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.315845 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v2r9\" (UniqueName: \"kubernetes.io/projected/cea39ffd-421f-4b74-9f26-065f49e00786-kube-api-access-7v2r9\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.316282 4902 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.316355 4902 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert podName:14dc1630-021a-4b05-8ac4-d99368b51726 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:18.816337133 +0000 UTC m=+920.893170162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" (UID: "14dc1630-021a-4b05-8ac4-d99368b51726") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.342081 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhtrq\" (UniqueName: \"kubernetes.io/projected/01091192-af46-486f-8890-787505f3b41c-kube-api-access-fhtrq\") pod \"mariadb-operator-controller-manager-c87fff755-xrlqr\" (UID: \"01091192-af46-486f-8890-787505f3b41c\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.343134 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf6dh\" (UniqueName: \"kubernetes.io/projected/0b55bf9c-cc65-446c-849e-035fb1bba4c4-kube-api-access-nf6dh\") pod \"neutron-operator-controller-manager-cb4666565-8vfnj\" (UID: \"0b55bf9c-cc65-446c-849e-035fb1bba4c4\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.343789 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkxds\" (UniqueName: \"kubernetes.io/projected/a5d9aa95-7d14-4a6e-af38-dddad85007f4-kube-api-access-fkxds\") pod \"manila-operator-controller-manager-864f6b75bf-x6xrb\" (UID: \"a5d9aa95-7d14-4a6e-af38-dddad85007f4\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.359528 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wtxx\" (UniqueName: \"kubernetes.io/projected/3912b1da-b132-48da-9b67-1f4aeb2203c4-kube-api-access-9wtxx\") pod \"ovn-operator-controller-manager-55db956ddc-lljfd\" (UID: \"3912b1da-b132-48da-9b67-1f4aeb2203c4\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.360133 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npq9n\" (UniqueName: \"kubernetes.io/projected/bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90-kube-api-access-npq9n\") pod \"octavia-operator-controller-manager-7fc9b76cf6-c2nb6\" (UID: \"bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.360955 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgj2z\" (UniqueName: \"kubernetes.io/projected/b01862fd-dfad-4a73-ac90-5ef7823c06ea-kube-api-access-tgj2z\") pod \"nova-operator-controller-manager-65849867d6-nql9r\" (UID: \"b01862fd-dfad-4a73-ac90-5ef7823c06ea\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.366203 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64d49\" (UniqueName: \"kubernetes.io/projected/7d33c2a4-c369-4a5f-9592-289c162f095c-kube-api-access-64d49\") pod 
\"keystone-operator-controller-manager-767fdc4f47-qwcvn\" (UID: \"7d33c2a4-c369-4a5f-9592-289c162f095c\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.373155 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l57j4\" (UniqueName: \"kubernetes.io/projected/14dc1630-021a-4b05-8ac4-d99368b51726-kube-api-access-l57j4\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.407107 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.408584 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.408652 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkv5c\" (UniqueName: \"kubernetes.io/projected/1e685238-529c-4964-af9d-8abed4dfcfae-kube-api-access-xkv5c\") pod \"swift-operator-controller-manager-85dd56d4cc-wqmq2\" (UID: \"1e685238-529c-4964-af9d-8abed4dfcfae\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.408682 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4x6s\" (UniqueName: \"kubernetes.io/projected/c5d64dc8-80f6-4076-9068-11ec25d524b5-kube-api-access-l4x6s\") pod \"placement-operator-controller-manager-686df47fcb-pmvgc\" (UID: \"c5d64dc8-80f6-4076-9068-11ec25d524b5\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.408993 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.409020 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf"] Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.409067 4902 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.409125 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert podName:cea39ffd-421f-4b74-9f26-065f49e00786 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:19.409100355 +0000 UTC m=+921.485933384 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert") pod "infra-operator-controller-manager-77c48c7859-46xm9" (UID: "cea39ffd-421f-4b74-9f26-065f49e00786") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.409460 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.409662 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.414537 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.419401 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-4nfmt" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.419509 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-lr9lm" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.440483 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.445250 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.448242 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkv5c\" (UniqueName: \"kubernetes.io/projected/1e685238-529c-4964-af9d-8abed4dfcfae-kube-api-access-xkv5c\") pod \"swift-operator-controller-manager-85dd56d4cc-wqmq2\" (UID: \"1e685238-529c-4964-af9d-8abed4dfcfae\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.460169 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.460185 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.468139 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4x6s\" (UniqueName: \"kubernetes.io/projected/c5d64dc8-80f6-4076-9068-11ec25d524b5-kube-api-access-l4x6s\") pod \"placement-operator-controller-manager-686df47fcb-pmvgc\" (UID: \"c5d64dc8-80f6-4076-9068-11ec25d524b5\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.486722 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.487882 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.495840 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-jnz88" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.504462 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.510138 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rrr2\" (UniqueName: \"kubernetes.io/projected/624ad6d5-5647-43c8-8e62-751e4c5989b3-kube-api-access-7rrr2\") pod \"test-operator-controller-manager-7cd8bc9dbb-gn5kf\" (UID: \"624ad6d5-5647-43c8-8e62-751e4c5989b3\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.510217 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frm2w\" (UniqueName: \"kubernetes.io/projected/2ad74206-4131-4395-8392-9697c2c164eb-kube-api-access-frm2w\") pod \"telemetry-operator-controller-manager-5f8f495fcf-v7bj9\" (UID: \"2ad74206-4131-4395-8392-9697c2c164eb\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.511512 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.521818 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.533884 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.540961 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.541747 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.548033 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.548344 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.548987 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zbxwz" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.557190 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.586446 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.608144 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.609612 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.610207 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.610902 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rrr2\" (UniqueName: \"kubernetes.io/projected/624ad6d5-5647-43c8-8e62-751e4c5989b3-kube-api-access-7rrr2\") pod \"test-operator-controller-manager-7cd8bc9dbb-gn5kf\" (UID: \"624ad6d5-5647-43c8-8e62-751e4c5989b3\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.610963 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frm2w\" (UniqueName: \"kubernetes.io/projected/2ad74206-4131-4395-8392-9697c2c164eb-kube-api-access-frm2w\") pod \"telemetry-operator-controller-manager-5f8f495fcf-v7bj9\" (UID: \"2ad74206-4131-4395-8392-9697c2c164eb\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.611005 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrw4x\" (UniqueName: \"kubernetes.io/projected/6783daa1-082d-4ab7-be65-dc2fb211be6c-kube-api-access-hrw4x\") pod \"watcher-operator-controller-manager-64cd966744-s8g8n\" (UID: \"6783daa1-082d-4ab7-be65-dc2fb211be6c\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.624289 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.630448 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-kqtcx" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.648215 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rrr2\" (UniqueName: \"kubernetes.io/projected/624ad6d5-5647-43c8-8e62-751e4c5989b3-kube-api-access-7rrr2\") pod \"test-operator-controller-manager-7cd8bc9dbb-gn5kf\" (UID: \"624ad6d5-5647-43c8-8e62-751e4c5989b3\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.655495 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frm2w\" (UniqueName: \"kubernetes.io/projected/2ad74206-4131-4395-8392-9697c2c164eb-kube-api-access-frm2w\") pod \"telemetry-operator-controller-manager-5f8f495fcf-v7bj9\" (UID: \"2ad74206-4131-4395-8392-9697c2c164eb\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.662065 4902 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.713827 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.713880 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrw4x\" (UniqueName: \"kubernetes.io/projected/6783daa1-082d-4ab7-be65-dc2fb211be6c-kube-api-access-hrw4x\") pod \"watcher-operator-controller-manager-64cd966744-s8g8n\" (UID: \"6783daa1-082d-4ab7-be65-dc2fb211be6c\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.713914 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szjnn\" (UniqueName: \"kubernetes.io/projected/1ffd452b-d331-4c80-a6f6-0b1b21d5fd84-kube-api-access-szjnn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-s7vgs\" (UID: \"1ffd452b-d331-4c80-a6f6-0b1b21d5fd84\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.714030 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.714117 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg6p2\" (UniqueName: \"kubernetes.io/projected/77e35131-84f1-4df7-b6de-ceda247df931-kube-api-access-xg6p2\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.726958 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.754945 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrw4x\" (UniqueName: \"kubernetes.io/projected/6783daa1-082d-4ab7-be65-dc2fb211be6c-kube-api-access-hrw4x\") pod \"watcher-operator-controller-manager-64cd966744-s8g8n\" (UID: \"6783daa1-082d-4ab7-be65-dc2fb211be6c\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.777865 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.820439 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.820480 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.820557 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg6p2\" (UniqueName: \"kubernetes.io/projected/77e35131-84f1-4df7-b6de-ceda247df931-kube-api-access-xg6p2\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.820629 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.820649 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szjnn\" (UniqueName: \"kubernetes.io/projected/1ffd452b-d331-4c80-a6f6-0b1b21d5fd84-kube-api-access-szjnn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-s7vgs\" (UID: \"1ffd452b-d331-4c80-a6f6-0b1b21d5fd84\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.821590 4902 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.821652 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert podName:14dc1630-021a-4b05-8ac4-d99368b51726 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:19.821635211 +0000 UTC m=+921.898468240 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" (UID: "14dc1630-021a-4b05-8ac4-d99368b51726") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.821966 4902 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.822006 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:19.321996491 +0000 UTC m=+921.398829520 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "webhook-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.822921 4902 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.822961 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:19.322950357 +0000 UTC m=+921.399783386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "metrics-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.862466 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg6p2\" (UniqueName: \"kubernetes.io/projected/77e35131-84f1-4df7-b6de-ceda247df931-kube-api-access-xg6p2\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.867465 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szjnn\" (UniqueName: \"kubernetes.io/projected/1ffd452b-d331-4c80-a6f6-0b1b21d5fd84-kube-api-access-szjnn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-s7vgs\" (UID: \"1ffd452b-d331-4c80-a6f6-0b1b21d5fd84\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.883128 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.954004 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.985837 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd"] Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.229725 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9"] Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.236052 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" event={"ID":"66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e","Type":"ContainerStarted","Data":"d09f803cdce587e4b8d545fcffcca3989a4dd1ca010981fe40dacf3d29241270"} Jan 21 14:49:19 crc kubenswrapper[4902]: W0121 14:49:19.252219 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56c38bff_8549_485e_a91f_1d89d801a8ee.slice/crio-531c8c4a5d7fd5896c1278df57cafe46baf2d8bc7312a9f3635c06ba9045afe4 WatchSource:0}: Error finding container 531c8c4a5d7fd5896c1278df57cafe46baf2d8bc7312a9f3635c06ba9045afe4: Status 404 returned error can't find the container with id 531c8c4a5d7fd5896c1278df57cafe46baf2d8bc7312a9f3635c06ba9045afe4 Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.274517 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-gffs4"] Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.283979 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr"] Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.335632 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.335753 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:19 crc kubenswrapper[4902]: E0121 14:49:19.335915 4902 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:49:19 crc kubenswrapper[4902]: E0121 14:49:19.335985 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:20.335969128 +0000 UTC m=+922.412802157 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "webhook-server-cert" not found Jan 21 14:49:19 crc kubenswrapper[4902]: E0121 14:49:19.337644 4902 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:49:19 crc kubenswrapper[4902]: E0121 14:49:19.337709 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:20.337695345 +0000 UTC m=+922.414528374 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "metrics-server-cert" not found Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.437816 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:19 crc kubenswrapper[4902]: E0121 14:49:19.438199 4902 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:19 crc kubenswrapper[4902]: E0121 14:49:19.438287 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert podName:cea39ffd-421f-4b74-9f26-065f49e00786 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:21.438261681 +0000 UTC m=+923.515094770 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert") pod "infra-operator-controller-manager-77c48c7859-46xm9" (UID: "cea39ffd-421f-4b74-9f26-065f49e00786") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.844966 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:19 crc kubenswrapper[4902]: E0121 14:49:19.845248 4902 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:19 crc kubenswrapper[4902]: E0121 14:49:19.845403 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert podName:14dc1630-021a-4b05-8ac4-d99368b51726 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:21.845385769 +0000 UTC m=+923.922218798 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" (UID: "14dc1630-021a-4b05-8ac4-d99368b51726") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:19 crc kubenswrapper[4902]: W0121 14:49:19.929524 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5d64dc8_80f6_4076_9068_11ec25d524b5.slice/crio-2011aad83d7ab0766aaa35b4b5514bcf9be82e40a137c848d439b1d1fdce6cf7 WatchSource:0}: Error finding container 2011aad83d7ab0766aaa35b4b5514bcf9be82e40a137c848d439b1d1fdce6cf7: Status 404 returned error can't find the container with id 2011aad83d7ab0766aaa35b4b5514bcf9be82e40a137c848d439b1d1fdce6cf7 Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.937380 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc"] Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.964404 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb"] Jan 21 14:49:20 crc kubenswrapper[4902]: W0121 14:49:20.001616 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc7bedc3_7b23_4f5c_bfbb_7b05694e6b90.slice/crio-72afda5a413e7ac9c92b85720c39a8b517192179bd090d71f85c8dd8a9c63916 WatchSource:0}: Error finding container 72afda5a413e7ac9c92b85720c39a8b517192179bd090d71f85c8dd8a9c63916: Status 404 returned error can't find the container with id 72afda5a413e7ac9c92b85720c39a8b517192179bd090d71f85c8dd8a9c63916 Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.005273 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6"] Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.010126 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh"] Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.014646 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs"] Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.029622 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt"] Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.182330 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-nql9r"] Jan 21 14:49:20 crc kubenswrapper[4902]: W0121 14:49:20.188500 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb01862fd_dfad_4a73_ac90_5ef7823c06ea.slice/crio-476005716aeaae86727250c0adb0672168bda9dc84fd95f11ca6401a4fa7302b WatchSource:0}: Error finding container 476005716aeaae86727250c0adb0672168bda9dc84fd95f11ca6401a4fa7302b: Status 404 returned error can't find the container with id 476005716aeaae86727250c0adb0672168bda9dc84fd95f11ca6401a4fa7302b Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.245831 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" 
event={"ID":"3c1e8b4d-a47d-4a6e-be63-bfc41d04d964","Type":"ContainerStarted","Data":"0e495f7c375b9e06811a8bf274df4e72da6ce9145a34ea2f11e5fa3872413b62"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.247228 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" event={"ID":"f3f5f576-48b8-4175-8d70-d8de7e41a63a","Type":"ContainerStarted","Data":"0b7aefa08581ab163ca0872316e627b351164c9458c789799eb56a6df1feeb5a"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.248678 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" event={"ID":"c5d64dc8-80f6-4076-9068-11ec25d524b5","Type":"ContainerStarted","Data":"2011aad83d7ab0766aaa35b4b5514bcf9be82e40a137c848d439b1d1fdce6cf7"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.250175 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" event={"ID":"56c38bff-8549-485e-a91f-1d89d801a8ee","Type":"ContainerStarted","Data":"531c8c4a5d7fd5896c1278df57cafe46baf2d8bc7312a9f3635c06ba9045afe4"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.253294 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" event={"ID":"b924ea4f-71c9-4f42-aa0a-a4945ea589e3","Type":"ContainerStarted","Data":"a1de2aa57c8376be1b8c65528efc6e3ffa2f9043ebd319277086a0179cc9b46d"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.254781 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" event={"ID":"05001c4b-c8f0-46ea-bf02-d7537d8a373b","Type":"ContainerStarted","Data":"84680913e2093f5caf3b49d2183431d68bda2b464e3351aba53c472b32749618"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.256424 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" event={"ID":"bc4c2749-7073-4bb8-8c87-736187565b08","Type":"ContainerStarted","Data":"cd5bda888051a4fc6e23c69c2e0369b531199ebdd8493e6af8d4876a3710a4e5"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.257474 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" event={"ID":"a5d9aa95-7d14-4a6e-af38-dddad85007f4","Type":"ContainerStarted","Data":"efebdb73d651dcecfd8bcf699e5a49ce41002d6c95d657d3c85221347542ff8c"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.259709 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" event={"ID":"b01862fd-dfad-4a73-ac90-5ef7823c06ea","Type":"ContainerStarted","Data":"476005716aeaae86727250c0adb0672168bda9dc84fd95f11ca6401a4fa7302b"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.261129 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" event={"ID":"bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90","Type":"ContainerStarted","Data":"72afda5a413e7ac9c92b85720c39a8b517192179bd090d71f85c8dd8a9c63916"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.340439 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd"] Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.350905 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.351008 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.351687 4902 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.351748 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:22.351729565 +0000 UTC m=+924.428562594 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "webhook-server-cert" not found Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.352312 4902 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.352352 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:22.352341662 +0000 UTC m=+924.429174691 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "metrics-server-cert" not found Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.373667 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2"] Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.391718 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj"] Jan 21 14:49:20 crc kubenswrapper[4902]: W0121 14:49:20.392226 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6783daa1_082d_4ab7_be65_dc2fb211be6c.slice/crio-a84babe89ef5ca22a0be2933d44543b6956cf0dcdeb121f886eb367a50389822 WatchSource:0}: Error finding container a84babe89ef5ca22a0be2933d44543b6956cf0dcdeb121f886eb367a50389822: Status 404 returned error can't find the container with id a84babe89ef5ca22a0be2933d44543b6956cf0dcdeb121f886eb367a50389822 Jan 21 14:49:20 crc kubenswrapper[4902]: W0121 14:49:20.394992 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e685238_529c_4964_af9d_8abed4dfcfae.slice/crio-5bb277a81a91f081abbf44e4a00f67bbab3d5088712b8fe699084d9d06ebd8d8 WatchSource:0}: Error finding container 5bb277a81a91f081abbf44e4a00f67bbab3d5088712b8fe699084d9d06ebd8d8: Status 404 returned error can't find the container with id 5bb277a81a91f081abbf44e4a00f67bbab3d5088712b8fe699084d9d06ebd8d8 Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.400973 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr"] Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.412304 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n"] Jan 21 14:49:20 crc kubenswrapper[4902]: W0121 14:49:20.413433 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b55bf9c_cc65_446c_849e_035fb1bba4c4.slice/crio-e10e14d87e0001550e69b4cb3bcf954aab9c87da3a32e92ce3caaf365d0eed6a WatchSource:0}: Error finding container e10e14d87e0001550e69b4cb3bcf954aab9c87da3a32e92ce3caaf365d0eed6a: Status 404 returned error can't find the container with id e10e14d87e0001550e69b4cb3bcf954aab9c87da3a32e92ce3caaf365d0eed6a Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.417637 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9"] Jan 21 14:49:20 crc kubenswrapper[4902]: W0121 14:49:20.425285 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ad74206_4131_4395_8392_9697c2c164eb.slice/crio-c543ef60c825f8e1d74bca6b41e90de5a83a9c82b3207b3c32594e1772df6f68 WatchSource:0}: Error finding container c543ef60c825f8e1d74bca6b41e90de5a83a9c82b3207b3c32594e1772df6f68: Status 404 returned error can't find the container with id c543ef60c825f8e1d74bca6b41e90de5a83a9c82b3207b3c32594e1772df6f68 Jan 21 14:49:20 crc kubenswrapper[4902]: W0121 14:49:20.426406 4902 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01091192_af46_486f_8890_787505f3b41c.slice/crio-b5b6b6fa7cf8fb796626afdcc27f26238e0e181c53c4f6ff361752d7b8d7148a WatchSource:0}: Error finding container b5b6b6fa7cf8fb796626afdcc27f26238e0e181c53c4f6ff361752d7b8d7148a: Status 404 returned error can't find the container with id b5b6b6fa7cf8fb796626afdcc27f26238e0e181c53c4f6ff361752d7b8d7148a Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.426476 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn"] Jan 21 14:49:20 crc kubenswrapper[4902]: W0121 14:49:20.427410 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod624ad6d5_5647_43c8_8e62_751e4c5989b3.slice/crio-03af43db7907b9f96b39c44e3212f0f413456154db6e1fb754b18514e0179dc9 WatchSource:0}: Error finding container 03af43db7907b9f96b39c44e3212f0f413456154db6e1fb754b18514e0179dc9: Status 404 returned error can't find the container with id 03af43db7907b9f96b39c44e3212f0f413456154db6e1fb754b18514e0179dc9 Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.432824 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs"] Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.435796 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-szjnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-s7vgs_openstack-operators(1ffd452b-d331-4c80-a6f6-0b1b21d5fd84): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 14:49:20 crc 
kubenswrapper[4902]: E0121 14:49:20.436034 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rrr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-gn5kf_openstack-operators(624ad6d5-5647-43c8-8e62-751e4c5989b3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.436029 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-frm2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-v7bj9_openstack-operators(2ad74206-4131-4395-8392-9697c2c164eb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.436266 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64d49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-qwcvn_openstack-operators(7d33c2a4-c369-4a5f-9592-289c162f095c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.437538 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" podUID="624ad6d5-5647-43c8-8e62-751e4c5989b3" Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.437568 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" podUID="1ffd452b-d331-4c80-a6f6-0b1b21d5fd84" Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.437583 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" podUID="7d33c2a4-c369-4a5f-9592-289c162f095c" Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.437586 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" podUID="2ad74206-4131-4395-8392-9697c2c164eb" Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.439873 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf"] Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.286805 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" event={"ID":"7d33c2a4-c369-4a5f-9592-289c162f095c","Type":"ContainerStarted","Data":"bc8c2ab20c735a10d99185659594096ce7c73887f21c29118a5143ec7300e0c3"} Jan 21 14:49:21 crc kubenswrapper[4902]: E0121 14:49:21.297479 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" podUID="7d33c2a4-c369-4a5f-9592-289c162f095c" Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.322767 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" 
event={"ID":"3912b1da-b132-48da-9b67-1f4aeb2203c4","Type":"ContainerStarted","Data":"ec059c628d28ef3c5fab56f7434e6532dcf70f387cfb37b54135ca6090315cc2"} Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.325564 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" event={"ID":"1ffd452b-d331-4c80-a6f6-0b1b21d5fd84","Type":"ContainerStarted","Data":"8cce3972789784b00226c241d9c6385d6b6cffa30263d10db1f45ebc58d8cca6"} Jan 21 14:49:21 crc kubenswrapper[4902]: E0121 14:49:21.327618 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" podUID="1ffd452b-d331-4c80-a6f6-0b1b21d5fd84" Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.330772 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" event={"ID":"2ad74206-4131-4395-8392-9697c2c164eb","Type":"ContainerStarted","Data":"c543ef60c825f8e1d74bca6b41e90de5a83a9c82b3207b3c32594e1772df6f68"} Jan 21 14:49:21 crc kubenswrapper[4902]: E0121 14:49:21.332679 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" podUID="2ad74206-4131-4395-8392-9697c2c164eb" Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.338133 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" event={"ID":"1e685238-529c-4964-af9d-8abed4dfcfae","Type":"ContainerStarted","Data":"5bb277a81a91f081abbf44e4a00f67bbab3d5088712b8fe699084d9d06ebd8d8"} Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.381882 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" event={"ID":"624ad6d5-5647-43c8-8e62-751e4c5989b3","Type":"ContainerStarted","Data":"03af43db7907b9f96b39c44e3212f0f413456154db6e1fb754b18514e0179dc9"} Jan 21 14:49:21 crc kubenswrapper[4902]: E0121 14:49:21.385015 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" podUID="624ad6d5-5647-43c8-8e62-751e4c5989b3" Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.385892 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" event={"ID":"6783daa1-082d-4ab7-be65-dc2fb211be6c","Type":"ContainerStarted","Data":"a84babe89ef5ca22a0be2933d44543b6956cf0dcdeb121f886eb367a50389822"} Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.395075 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" 
event={"ID":"0b55bf9c-cc65-446c-849e-035fb1bba4c4","Type":"ContainerStarted","Data":"e10e14d87e0001550e69b4cb3bcf954aab9c87da3a32e92ce3caaf365d0eed6a"} Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.401158 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" event={"ID":"01091192-af46-486f-8890-787505f3b41c","Type":"ContainerStarted","Data":"b5b6b6fa7cf8fb796626afdcc27f26238e0e181c53c4f6ff361752d7b8d7148a"} Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.487877 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:21 crc kubenswrapper[4902]: E0121 14:49:21.488127 4902 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:21 crc kubenswrapper[4902]: E0121 14:49:21.488238 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert podName:cea39ffd-421f-4b74-9f26-065f49e00786 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:25.488185552 +0000 UTC m=+927.565018581 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert") pod "infra-operator-controller-manager-77c48c7859-46xm9" (UID: "cea39ffd-421f-4b74-9f26-065f49e00786") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.933452 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:21 crc kubenswrapper[4902]: E0121 14:49:21.933669 4902 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:21 crc kubenswrapper[4902]: E0121 14:49:21.933732 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert podName:14dc1630-021a-4b05-8ac4-d99368b51726 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:25.933715846 +0000 UTC m=+928.010548875 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" (UID: "14dc1630-021a-4b05-8ac4-d99368b51726") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:22 crc kubenswrapper[4902]: I0121 14:49:22.355354 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:22 crc kubenswrapper[4902]: I0121 14:49:22.355613 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:22 crc kubenswrapper[4902]: E0121 14:49:22.355626 4902 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:49:22 crc kubenswrapper[4902]: E0121 14:49:22.355766 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:26.355752774 +0000 UTC m=+928.432585803 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "webhook-server-cert" not found Jan 21 14:49:22 crc kubenswrapper[4902]: E0121 14:49:22.355698 4902 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:49:22 crc kubenswrapper[4902]: E0121 14:49:22.355891 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:26.355858757 +0000 UTC m=+928.432691866 (durationBeforeRetry 4s). 
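
Each "Unhandled Error" from kuberuntime_manager.go embeds the entire container spec because the error wraps the v1.Container value, and the Kubernetes API types carry a generated String() method that renders the &Container{Field:value,...} shape; only a handful of fields (Name, Image, Args, Resources, the probes) carry signal, and the rest is zero-valued noise. A toy reproduction of where that rendering comes from, using a hypothetical trimmed struct rather than the real k8s.io/api type:

    package main

    import "fmt"

    // Container is a hypothetical stand-in for k8s.io/api/core/v1.Container,
    // which has dozens of fields; that is why each dump above runs to kilobytes.
    type Container struct {
        Name    string
        Image   string
        Command []string
    }

    // String mimics the generated String() on the real API type, which is what
    // produces the &Container{Field:value,...} shape in the kubelet log.
    func (c *Container) String() string {
        return fmt.Sprintf("&Container{Name:%s,Image:%s,Command:%v,}", c.Name, c.Image, c.Command)
    }

    func main() {
        c := &Container{
            Name:    "manager",
            Image:   "quay.io/openstack-k8s-operators/test-operator",
            Command: []string{"/manager"},
        }
        // %v picks up the String() method, just as the kubelet's logger does.
        fmt.Printf("container %v start failed: ErrImagePull\n", c)
    }
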
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "metrics-server-cert" not found Jan 21 14:49:22 crc kubenswrapper[4902]: E0121 14:49:22.439388 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" podUID="2ad74206-4131-4395-8392-9697c2c164eb" Jan 21 14:49:22 crc kubenswrapper[4902]: E0121 14:49:22.439642 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" podUID="624ad6d5-5647-43c8-8e62-751e4c5989b3" Jan 21 14:49:22 crc kubenswrapper[4902]: E0121 14:49:22.439673 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" podUID="1ffd452b-d331-4c80-a6f6-0b1b21d5fd84" Jan 21 14:49:22 crc kubenswrapper[4902]: E0121 14:49:22.439701 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" podUID="7d33c2a4-c369-4a5f-9592-289c162f095c" Jan 21 14:49:25 crc kubenswrapper[4902]: I0121 14:49:25.554271 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:25 crc kubenswrapper[4902]: E0121 14:49:25.554431 4902 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:25 crc kubenswrapper[4902]: E0121 14:49:25.554658 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert podName:cea39ffd-421f-4b74-9f26-065f49e00786 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:33.554642165 +0000 UTC m=+935.631475194 (durationBeforeRetry 8s). 
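
Two independent backoffs are interleaved here and are easy to conflate. Failed pulls put the image into the kubelet's pull back-off, surfaced as ImagePullBackOff; failed secret mounts put the volume operation into nestedpendingoperations' exponential retry, which doubles durationBeforeRetry on each failure, hence the 4s → 8s → 16s progression across these records. A sketch of both schedules; the initial values and caps are assumptions about kubelet internals (commonly 10s doubling to a 5m cap for pulls, 500ms doubling to roughly 2m for mount retries) and may differ by version:

    package main

    import (
        "fmt"
        "time"
    )

    // schedule prints a doubling backoff from an initial delay up to a cap.
    func schedule(name string, initial, limit time.Duration, steps int) {
        d := initial
        for i := 1; i <= steps; i++ {
            fmt.Printf("%-12s retry %d: wait %v\n", name, i, d)
            d *= 2
            if d > limit {
                d = limit
            }
        }
    }

    func main() {
        // Assumed parameters (kubelet internals, version-dependent):
        // the volume-mount schedule passes through the 4s/8s/16s seen here.
        schedule("image pull", 10*time.Second, 5*time.Minute, 6)
        schedule("volume mount", 500*time.Millisecond, 2*time.Minute+2*time.Second, 9)
    }
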
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert") pod "infra-operator-controller-manager-77c48c7859-46xm9" (UID: "cea39ffd-421f-4b74-9f26-065f49e00786") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:25 crc kubenswrapper[4902]: I0121 14:49:25.959963 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:25 crc kubenswrapper[4902]: E0121 14:49:25.960184 4902 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:25 crc kubenswrapper[4902]: E0121 14:49:25.960287 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert podName:14dc1630-021a-4b05-8ac4-d99368b51726 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:33.960265561 +0000 UTC m=+936.037098610 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" (UID: "14dc1630-021a-4b05-8ac4-d99368b51726") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:26 crc kubenswrapper[4902]: I0121 14:49:26.367005 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:26 crc kubenswrapper[4902]: I0121 14:49:26.367462 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:26 crc kubenswrapper[4902]: E0121 14:49:26.367635 4902 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:49:26 crc kubenswrapper[4902]: E0121 14:49:26.367696 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:34.367676937 +0000 UTC m=+936.444509966 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "webhook-server-cert" not found Jan 21 14:49:26 crc kubenswrapper[4902]: E0121 14:49:26.369586 4902 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:49:26 crc kubenswrapper[4902]: E0121 14:49:26.369631 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:34.3696174 +0000 UTC m=+936.446450439 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "metrics-server-cert" not found Jan 21 14:49:33 crc kubenswrapper[4902]: I0121 14:49:33.633205 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:33 crc kubenswrapper[4902]: E0121 14:49:33.633311 4902 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:33 crc kubenswrapper[4902]: E0121 14:49:33.633833 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert podName:cea39ffd-421f-4b74-9f26-065f49e00786 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:49.633818655 +0000 UTC m=+951.710651684 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert") pod "infra-operator-controller-manager-77c48c7859-46xm9" (UID: "cea39ffd-421f-4b74-9f26-065f49e00786") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:34 crc kubenswrapper[4902]: I0121 14:49:34.038659 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.038858 4902 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.038913 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert podName:14dc1630-021a-4b05-8ac4-d99368b51726 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:50.038896046 +0000 UTC m=+952.115729075 (durationBeforeRetry 16s). 
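
All of the MountVolume.SetUp failures have the same root cause: the operator pods reference TLS secrets (infra-operator-webhook-server-cert, openstack-baremetal-operator-webhook-server-cert, webhook-server-cert, metrics-server-cert) that nothing has created yet, so the pods sit in ContainerCreating while the kubelet retries on the schedule above; once the issuer writes the secrets (presumably cert-manager or OLM in this deployment, which the log does not identify), the mounts succeed, as they do at 14:49:49 below. For triage it is often faster to query the referenced secrets directly; a minimal client-go sketch, assuming a reachable kubeconfig:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The secret names come straight from the MountVolume errors above.
        for _, name := range []string{
            "infra-operator-webhook-server-cert",
            "openstack-baremetal-operator-webhook-server-cert",
            "webhook-server-cert",
            "metrics-server-cert",
        } {
            _, err := cs.CoreV1().Secrets("openstack-operators").Get(context.TODO(), name, metav1.GetOptions{})
            fmt.Printf("%-50s exists=%v\n", name, err == nil)
        }
    }
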
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" (UID: "14dc1630-021a-4b05-8ac4-d99368b51726") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:34 crc kubenswrapper[4902]: I0121 14:49:34.444381 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:34 crc kubenswrapper[4902]: I0121 14:49:34.444540 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.444843 4902 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.444958 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:50.444937673 +0000 UTC m=+952.521770702 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "metrics-server-cert" not found Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.445852 4902 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.445898 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:50.445889019 +0000 UTC m=+952.522722048 (durationBeforeRetry 16s). 
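
"rpc error: code = Canceled desc = copying config: context canceled", which dominates the records that follow, is CRI-O reporting that the ImagePull RPC's context was cancelled while the image copy was still in flight, i.e. the kubelet side stopped waiting mid-transfer (the log does not show the trigger; on a node this congested, pulls routinely outlive their caller). Mechanically it is ordinary Go context cancellation crossing the CRI boundary, as this self-contained sketch shows:

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // pullImage stands in for a CRI ImagePull call: a slow blob copy that
    // honours its context, as CRI-O's image copy does.
    func pullImage(ctx context.Context) error {
        select {
        case <-time.After(10 * time.Second): // the copy would take this long
            return nil
        case <-ctx.Done():
            return fmt.Errorf("copying config: %w", ctx.Err())
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        go func() { // the caller gives up mid-transfer, as the kubelet did here
            time.Sleep(100 * time.Millisecond)
            cancel()
        }()
        fmt.Println(pullImage(ctx)) // prints: copying config: context canceled
    }

The cancelled pulls then join the same ImagePullBackOff cycle as the QPS-throttled ones.
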
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "webhook-server-cert" not found Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.643933 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92" Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.644129 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xkv5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-wqmq2_openstack-operators(1e685238-529c-4964-af9d-8abed4dfcfae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.645305 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" 
podUID="1e685238-529c-4964-af9d-8abed4dfcfae" Jan 21 14:49:35 crc kubenswrapper[4902]: E0121 14:49:35.526401 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 21 14:49:35 crc kubenswrapper[4902]: E0121 14:49:35.526821 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6tz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-lttm9_openstack-operators(56c38bff-8549-485e-a91f-1d89d801a8ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:35 crc kubenswrapper[4902]: E0121 14:49:35.528115 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" podUID="56c38bff-8549-485e-a91f-1d89d801a8ee" Jan 21 14:49:35 crc kubenswrapper[4902]: E0121 14:49:35.594915 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" podUID="1e685238-529c-4964-af9d-8abed4dfcfae" Jan 21 14:49:35 crc kubenswrapper[4902]: E0121 14:49:35.595614 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" podUID="56c38bff-8549-485e-a91f-1d89d801a8ee" Jan 21 14:49:36 crc kubenswrapper[4902]: E0121 14:49:36.129576 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32" Jan 21 14:49:36 crc kubenswrapper[4902]: E0121 14:49:36.129804 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fkxds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-864f6b75bf-x6xrb_openstack-operators(a5d9aa95-7d14-4a6e-af38-dddad85007f4): ErrImagePull: rpc error: 
code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:36 crc kubenswrapper[4902]: E0121 14:49:36.131723 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" podUID="a5d9aa95-7d14-4a6e-af38-dddad85007f4" Jan 21 14:49:36 crc kubenswrapper[4902]: E0121 14:49:36.602413 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" podUID="a5d9aa95-7d14-4a6e-af38-dddad85007f4" Jan 21 14:49:39 crc kubenswrapper[4902]: E0121 14:49:39.442177 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 21 14:49:39 crc kubenswrapper[4902]: E0121 14:49:39.442467 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9wtxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-lljfd_openstack-operators(3912b1da-b132-48da-9b67-1f4aeb2203c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:39 crc kubenswrapper[4902]: E0121 14:49:39.444153 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" podUID="3912b1da-b132-48da-9b67-1f4aeb2203c4" Jan 21 14:49:39 crc kubenswrapper[4902]: E0121 14:49:39.628788 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" podUID="3912b1da-b132-48da-9b67-1f4aeb2203c4" Jan 21 14:49:39 crc kubenswrapper[4902]: E0121 14:49:39.812529 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad" Jan 21 14:49:39 crc kubenswrapper[4902]: E0121 14:49:39.812751 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hrw4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-s8g8n_openstack-operators(6783daa1-082d-4ab7-be65-dc2fb211be6c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:39 crc kubenswrapper[4902]: E0121 14:49:39.813975 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" podUID="6783daa1-082d-4ab7-be65-dc2fb211be6c" Jan 21 14:49:40 crc kubenswrapper[4902]: E0121 14:49:40.633950 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" podUID="6783daa1-082d-4ab7-be65-dc2fb211be6c" Jan 21 14:49:49 crc kubenswrapper[4902]: I0121 14:49:49.633887 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:49 crc kubenswrapper[4902]: E0121 14:49:49.638487 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e" Jan 21 14:49:49 crc kubenswrapper[4902]: E0121 14:49:49.638695 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rrr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-gn5kf_openstack-operators(624ad6d5-5647-43c8-8e62-751e4c5989b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:49 crc kubenswrapper[4902]: E0121 14:49:49.640085 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" podUID="624ad6d5-5647-43c8-8e62-751e4c5989b3" Jan 21 14:49:49 crc kubenswrapper[4902]: I0121 14:49:49.642089 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:49 crc kubenswrapper[4902]: I0121 14:49:49.775422 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:50 crc kubenswrapper[4902]: I0121 14:49:50.040650 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:50 crc kubenswrapper[4902]: I0121 14:49:50.045669 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:50 crc kubenswrapper[4902]: I0121 14:49:50.261098 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:50 crc kubenswrapper[4902]: I0121 14:49:50.447825 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:50 crc kubenswrapper[4902]: I0121 14:49:50.447901 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:50 crc kubenswrapper[4902]: I0121 14:49:50.452063 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:50 crc kubenswrapper[4902]: I0121 14:49:50.456798 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:50 crc kubenswrapper[4902]: I0121 14:49:50.718459 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:51 crc kubenswrapper[4902]: E0121 14:49:51.652291 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e" Jan 21 14:49:51 crc kubenswrapper[4902]: E0121 14:49:51.652455 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64d49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-qwcvn_openstack-operators(7d33c2a4-c369-4a5f-9592-289c162f095c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:51 crc kubenswrapper[4902]: E0121 14:49:51.653647 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" podUID="7d33c2a4-c369-4a5f-9592-289c162f095c" Jan 21 14:49:52 crc kubenswrapper[4902]: E0121 14:49:52.276598 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525" Jan 21 14:49:52 crc kubenswrapper[4902]: E0121 14:49:52.276827 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqlvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-78757b4889-khcxt_openstack-operators(f3f5f576-48b8-4175-8d70-d8de7e41a63a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:52 crc kubenswrapper[4902]: E0121 14:49:52.278168 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" podUID="f3f5f576-48b8-4175-8d70-d8de7e41a63a" Jan 21 14:49:52 crc kubenswrapper[4902]: E0121 14:49:52.730649 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525\\\"\"" 
pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" podUID="f3f5f576-48b8-4175-8d70-d8de7e41a63a" Jan 21 14:49:53 crc kubenswrapper[4902]: E0121 14:49:53.355267 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8" Jan 21 14:49:53 crc kubenswrapper[4902]: E0121 14:49:53.355486 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwcjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-sdkxs_openstack-operators(bc4c2749-7073-4bb8-8c87-736187565b08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:53 crc kubenswrapper[4902]: E0121 14:49:53.356718 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" podUID="bc4c2749-7073-4bb8-8c87-736187565b08" Jan 21 14:49:53 crc kubenswrapper[4902]: E0121 14:49:53.748581 4902 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" podUID="bc4c2749-7073-4bb8-8c87-736187565b08" Jan 21 14:49:54 crc kubenswrapper[4902]: E0121 14:49:54.673507 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843" Jan 21 14:49:54 crc kubenswrapper[4902]: E0121 14:49:54.673786 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-frm2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-v7bj9_openstack-operators(2ad74206-4131-4395-8392-9697c2c164eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:54 crc kubenswrapper[4902]: E0121 14:49:54.675097 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" podUID="2ad74206-4131-4395-8392-9697c2c164eb" Jan 21 14:49:57 crc kubenswrapper[4902]: E0121 14:49:57.527997 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231" Jan 21 14:49:57 crc kubenswrapper[4902]: E0121 14:49:57.529250 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tgj2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-nql9r_openstack-operators(b01862fd-dfad-4a73-ac90-5ef7823c06ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:57 crc kubenswrapper[4902]: E0121 14:49:57.530638 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" podUID="b01862fd-dfad-4a73-ac90-5ef7823c06ea" Jan 21 14:49:57 crc kubenswrapper[4902]: E0121 14:49:57.775736 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" podUID="b01862fd-dfad-4a73-ac90-5ef7823c06ea" Jan 21 14:49:58 crc kubenswrapper[4902]: E0121 14:49:58.240839 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c" Jan 21 14:49:58 crc kubenswrapper[4902]: E0121 14:49:58.241237 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nf6dh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-cb4666565-8vfnj_openstack-operators(0b55bf9c-cc65-446c-849e-035fb1bba4c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:58 crc kubenswrapper[4902]: E0121 14:49:58.242666 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" podUID="0b55bf9c-cc65-446c-849e-035fb1bba4c4" Jan 21 14:49:58 crc kubenswrapper[4902]: E0121 14:49:58.785285 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" podUID="0b55bf9c-cc65-446c-849e-035fb1bba4c4" Jan 21 14:50:00 crc kubenswrapper[4902]: E0121 14:50:00.296134 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" podUID="624ad6d5-5647-43c8-8e62-751e4c5989b3" Jan 21 14:50:00 crc kubenswrapper[4902]: E0121 14:50:00.674187 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 21 14:50:00 crc kubenswrapper[4902]: E0121 14:50:00.674370 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-szjnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-s7vgs_openstack-operators(1ffd452b-d331-4c80-a6f6-0b1b21d5fd84): ErrImagePull: rpc error: code = Canceled 
desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:50:00 crc kubenswrapper[4902]: E0121 14:50:00.675470 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" podUID="1ffd452b-d331-4c80-a6f6-0b1b21d5fd84" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.462772 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8"] Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.489863 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9"] Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.777378 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g"] Jan 21 14:50:01 crc kubenswrapper[4902]: W0121 14:50:01.779644 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77e35131_84f1_4df7_b6de_ceda247df931.slice/crio-bd481c53ddb914b46e8d8252056b984f59876cf3f8595fc7a9c31d62dd2ae11e WatchSource:0}: Error finding container bd481c53ddb914b46e8d8252056b984f59876cf3f8595fc7a9c31d62dd2ae11e: Status 404 returned error can't find the container with id bd481c53ddb914b46e8d8252056b984f59876cf3f8595fc7a9c31d62dd2ae11e Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.804406 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" event={"ID":"66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e","Type":"ContainerStarted","Data":"147994c9c4d6ffbae7dae34d134e990e3809785d6b92aa20f207ea96ab026ec1"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.805833 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" event={"ID":"77e35131-84f1-4df7-b6de-ceda247df931","Type":"ContainerStarted","Data":"bd481c53ddb914b46e8d8252056b984f59876cf3f8595fc7a9c31d62dd2ae11e"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.807241 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" event={"ID":"3c1e8b4d-a47d-4a6e-be63-bfc41d04d964","Type":"ContainerStarted","Data":"5fc0538aaa09a715393ccb6d668c62fd974446e0a1ae4f3393e081b958848eb8"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.808250 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.809815 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" event={"ID":"3912b1da-b132-48da-9b67-1f4aeb2203c4","Type":"ContainerStarted","Data":"b5bea44854eafba048f117851acabbb00cfd0449b127a0acb316bb7c2d3d3b50"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.812321 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.815553 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" event={"ID":"cea39ffd-421f-4b74-9f26-065f49e00786","Type":"ContainerStarted","Data":"9ceb84be5ef5279bdade38a87108464391261dbe3b1954062f0c27d1232bb331"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.822562 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" event={"ID":"a5d9aa95-7d14-4a6e-af38-dddad85007f4","Type":"ContainerStarted","Data":"e5db9fdc618abbcd4a48560055de12494aedd908ef716a271a9ac8deea3b3978"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.826554 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" event={"ID":"bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90","Type":"ContainerStarted","Data":"07c9c538f77e19ab9fab3728e6e8eee968cf664672186bb870a319ad7426ac10"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.834947 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" event={"ID":"b924ea4f-71c9-4f42-aa0a-a4945ea589e3","Type":"ContainerStarted","Data":"ff5fd66c8e144fbf43aa4a446473a2217594924ffba9e1780d0ccf8a3d03b153"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.857319 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" podStartSLOduration=6.731189034 podStartE2EDuration="44.857305327s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:19.321629873 +0000 UTC m=+921.398462902" lastFinishedPulling="2026-01-21 14:49:57.447746166 +0000 UTC m=+959.524579195" observedRunningTime="2026-01-21 14:50:01.852996719 +0000 UTC m=+963.929829748" watchObservedRunningTime="2026-01-21 14:50:01.857305327 +0000 UTC m=+963.934138356" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.862893 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" event={"ID":"6783daa1-082d-4ab7-be65-dc2fb211be6c","Type":"ContainerStarted","Data":"0afdc19f03cfa72c43b19f1448fa00d0b965d21013c4b314d0e2db064467fe86"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.863199 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.869935 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" event={"ID":"14dc1630-021a-4b05-8ac4-d99368b51726","Type":"ContainerStarted","Data":"f3458e06a9e00c949e22ddd57bdc7a78bd00d6197a6943329b0c946cf6bbecd9"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.926962 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" event={"ID":"01091192-af46-486f-8890-787505f3b41c","Type":"ContainerStarted","Data":"96fabea12803f37ab2fdc0d110168312353a7c4b41d1b31c343457d52625df31"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.927252 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.972350 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" podStartSLOduration=4.683792293 podStartE2EDuration="44.972331501s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.382737618 +0000 UTC m=+922.459570647" lastFinishedPulling="2026-01-21 14:50:00.671276826 +0000 UTC m=+962.748109855" observedRunningTime="2026-01-21 14:50:01.905139963 +0000 UTC m=+963.981972992" watchObservedRunningTime="2026-01-21 14:50:01.972331501 +0000 UTC m=+964.049164530" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.973109 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" event={"ID":"56c38bff-8549-485e-a91f-1d89d801a8ee","Type":"ContainerStarted","Data":"bd91f5934af96035c1e6e6b50659da8a6322137cdc73010f3005a4cc270cf229"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.973836 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.973830 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" podStartSLOduration=3.699938257 podStartE2EDuration="43.973822932s" podCreationTimestamp="2026-01-21 14:49:18 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.397428542 +0000 UTC m=+922.474261571" lastFinishedPulling="2026-01-21 14:50:00.671313217 +0000 UTC m=+962.748146246" observedRunningTime="2026-01-21 14:50:01.969129433 +0000 UTC m=+964.045962462" watchObservedRunningTime="2026-01-21 14:50:01.973822932 +0000 UTC m=+964.050655961" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.981625 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" event={"ID":"1e685238-529c-4964-af9d-8abed4dfcfae","Type":"ContainerStarted","Data":"34c313c8ce31ed2925f2d5643ec662a5359fcb236385f602a710ace20a3739ff"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.982405 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.983806 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" event={"ID":"05001c4b-c8f0-46ea-bf02-d7537d8a373b","Type":"ContainerStarted","Data":"288d3b340d3fca254c6e56948869684d71b5385a70e9765fc390cc8727e12f8b"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.984411 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.985391 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" event={"ID":"c5d64dc8-80f6-4076-9068-11ec25d524b5","Type":"ContainerStarted","Data":"3769ea1de172394498b8477afddcb9ca9b1619b07fee03ea91472fadf2b2926d"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.985806 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" Jan 21 14:50:02 crc kubenswrapper[4902]: I0121 14:50:02.055764 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" podStartSLOduration=5.580785253 podStartE2EDuration="45.055738295s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.4304698 +0000 UTC m=+922.507302839" lastFinishedPulling="2026-01-21 14:49:59.905422832 +0000 UTC m=+961.982255881" observedRunningTime="2026-01-21 14:50:02.055265002 +0000 UTC m=+964.132098031" watchObservedRunningTime="2026-01-21 14:50:02.055738295 +0000 UTC m=+964.132571324" Jan 21 14:50:02 crc kubenswrapper[4902]: I0121 14:50:02.081657 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" podStartSLOduration=5.21599071 podStartE2EDuration="45.081642868s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.01889154 +0000 UTC m=+922.095724569" lastFinishedPulling="2026-01-21 14:49:59.884543688 +0000 UTC m=+961.961376727" observedRunningTime="2026-01-21 14:50:02.078739478 +0000 UTC m=+964.155572507" watchObservedRunningTime="2026-01-21 14:50:02.081642868 +0000 UTC m=+964.158475897" Jan 21 14:50:02 crc kubenswrapper[4902]: I0121 14:50:02.130111 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" podStartSLOduration=4.864183424 podStartE2EDuration="45.13009556s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.398629555 +0000 UTC m=+922.475462584" lastFinishedPulling="2026-01-21 14:50:00.664541691 +0000 UTC m=+962.741374720" observedRunningTime="2026-01-21 14:50:02.104599409 +0000 UTC m=+964.181432438" watchObservedRunningTime="2026-01-21 14:50:02.13009556 +0000 UTC m=+964.206928589" Jan 21 14:50:02 crc kubenswrapper[4902]: I0121 14:50:02.133731 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" podStartSLOduration=3.71727009 podStartE2EDuration="45.13372037s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:19.254819856 +0000 UTC m=+921.331652885" lastFinishedPulling="2026-01-21 14:50:00.671270136 +0000 UTC m=+962.748103165" observedRunningTime="2026-01-21 14:50:02.129381081 +0000 UTC m=+964.206214110" watchObservedRunningTime="2026-01-21 14:50:02.13372037 +0000 UTC m=+964.210553399" Jan 21 14:50:02 crc kubenswrapper[4902]: I0121 14:50:02.162085 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" podStartSLOduration=5.211417445 podStartE2EDuration="45.16206687s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:19.933986876 +0000 UTC m=+922.010819905" lastFinishedPulling="2026-01-21 14:49:59.884636311 +0000 UTC m=+961.961469330" observedRunningTime="2026-01-21 14:50:02.155936531 +0000 UTC m=+964.232769560" watchObservedRunningTime="2026-01-21 14:50:02.16206687 +0000 UTC m=+964.238899899" Jan 21 14:50:02 crc kubenswrapper[4902]: E0121 14:50:02.295733 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" 
pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" podUID="7d33c2a4-c369-4a5f-9592-289c162f095c" Jan 21 14:50:02 crc kubenswrapper[4902]: I0121 14:50:02.994980 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" event={"ID":"77e35131-84f1-4df7-b6de-ceda247df931","Type":"ContainerStarted","Data":"a662be622993c039ca65f54334e428be03d3233f4a97317f521d770c4efa4478"} Jan 21 14:50:03 crc kubenswrapper[4902]: I0121 14:50:03.096336 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" podStartSLOduration=7.054747872 podStartE2EDuration="46.096315774s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:19.023563725 +0000 UTC m=+921.100396754" lastFinishedPulling="2026-01-21 14:49:58.065131627 +0000 UTC m=+960.141964656" observedRunningTime="2026-01-21 14:50:03.091698667 +0000 UTC m=+965.168531696" watchObservedRunningTime="2026-01-21 14:50:03.096315774 +0000 UTC m=+965.173148803" Jan 21 14:50:03 crc kubenswrapper[4902]: I0121 14:50:03.108341 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" podStartSLOduration=5.405327608 podStartE2EDuration="46.108324175s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:19.961237645 +0000 UTC m=+922.038070674" lastFinishedPulling="2026-01-21 14:50:00.664234212 +0000 UTC m=+962.741067241" observedRunningTime="2026-01-21 14:50:03.105720613 +0000 UTC m=+965.182553662" watchObservedRunningTime="2026-01-21 14:50:03.108324175 +0000 UTC m=+965.185157204" Jan 21 14:50:03 crc kubenswrapper[4902]: I0121 14:50:03.143409 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" podStartSLOduration=5.60322565 podStartE2EDuration="46.143390939s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:19.3444222 +0000 UTC m=+921.421255229" lastFinishedPulling="2026-01-21 14:49:59.884587479 +0000 UTC m=+961.961420518" observedRunningTime="2026-01-21 14:50:03.142094203 +0000 UTC m=+965.218927242" watchObservedRunningTime="2026-01-21 14:50:03.143390939 +0000 UTC m=+965.220223978" Jan 21 14:50:03 crc kubenswrapper[4902]: I0121 14:50:03.168988 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" podStartSLOduration=8.118365296 podStartE2EDuration="46.168969883s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.015853097 +0000 UTC m=+922.092686126" lastFinishedPulling="2026-01-21 14:49:58.066457674 +0000 UTC m=+960.143290713" observedRunningTime="2026-01-21 14:50:03.164008786 +0000 UTC m=+965.240841815" watchObservedRunningTime="2026-01-21 14:50:03.168969883 +0000 UTC m=+965.245802912" Jan 21 14:50:03 crc kubenswrapper[4902]: I0121 14:50:03.197717 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" podStartSLOduration=45.197702493 podStartE2EDuration="45.197702493s" podCreationTimestamp="2026-01-21 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 14:50:03.196275984 +0000 UTC m=+965.273109013" watchObservedRunningTime="2026-01-21 14:50:03.197702493 +0000 UTC m=+965.274535522" Jan 21 14:50:04 crc kubenswrapper[4902]: I0121 14:50:04.000429 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:50:04 crc kubenswrapper[4902]: I0121 14:50:04.001772 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" Jan 21 14:50:07 crc kubenswrapper[4902]: I0121 14:50:07.684779 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" Jan 21 14:50:07 crc kubenswrapper[4902]: I0121 14:50:07.687254 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" Jan 21 14:50:07 crc kubenswrapper[4902]: I0121 14:50:07.688624 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" Jan 21 14:50:07 crc kubenswrapper[4902]: I0121 14:50:07.714276 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" Jan 21 14:50:07 crc kubenswrapper[4902]: I0121 14:50:07.840330 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" Jan 21 14:50:07 crc kubenswrapper[4902]: I0121 14:50:07.884564 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" Jan 21 14:50:08 crc kubenswrapper[4902]: E0121 14:50:08.383582 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" podUID="2ad74206-4131-4395-8392-9697c2c164eb" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.442175 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.446006 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.461731 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.487979 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.511527 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.528785 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.541063 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.598464 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.887719 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.032988 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" event={"ID":"f3f5f576-48b8-4175-8d70-d8de7e41a63a","Type":"ContainerStarted","Data":"dcecb835864950ebd8334a279999396d40124aadbabe0012b23a303973500058"} Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.033232 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.034640 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" event={"ID":"14dc1630-021a-4b05-8ac4-d99368b51726","Type":"ContainerStarted","Data":"7771f68efe7eba1dc9be23fbe5f3b261cc56a27bcd0ef2e57933374230eda516"} Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.034782 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.036308 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" event={"ID":"cea39ffd-421f-4b74-9f26-065f49e00786","Type":"ContainerStarted","Data":"6055f7d9b64f6008acc16b0297fd873cf37c0b32a0aa26ccef0a1e3942c877c8"} Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.053374 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" podStartSLOduration=3.684785026 podStartE2EDuration="52.053351338s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.019813626 +0000 UTC m=+922.096646655" lastFinishedPulling="2026-01-21 14:50:08.388379918 +0000 UTC m=+970.465212967" observedRunningTime="2026-01-21 14:50:09.048525435 +0000 UTC m=+971.125358464" watchObservedRunningTime="2026-01-21 14:50:09.053351338 +0000 UTC m=+971.130184377" Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.069819 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" podStartSLOduration=45.192816225 podStartE2EDuration="52.06980502s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:50:01.529550613 +0000 UTC m=+963.606383642" lastFinishedPulling="2026-01-21 14:50:08.406539388 +0000 UTC m=+970.483372437" observedRunningTime="2026-01-21 14:50:09.068256138 +0000 UTC m=+971.145089167" watchObservedRunningTime="2026-01-21 14:50:09.06980502 +0000 UTC m=+971.146638049" Jan 21 14:50:09 
crc kubenswrapper[4902]: I0121 14:50:09.086851 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" podStartSLOduration=45.227184441 podStartE2EDuration="52.086830669s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:50:01.527799175 +0000 UTC m=+963.604632204" lastFinishedPulling="2026-01-21 14:50:08.387445403 +0000 UTC m=+970.464278432" observedRunningTime="2026-01-21 14:50:09.083318962 +0000 UTC m=+971.160151991" watchObservedRunningTime="2026-01-21 14:50:09.086830669 +0000 UTC m=+971.163663718" Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.296179 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.776124 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:50:10 crc kubenswrapper[4902]: I0121 14:50:10.042818 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" event={"ID":"0b55bf9c-cc65-446c-849e-035fb1bba4c4","Type":"ContainerStarted","Data":"51b295bbdb999ae1b25dc5563e00d9a2dd300ea7580635b04d9b954d8997d641"} Jan 21 14:50:10 crc kubenswrapper[4902]: I0121 14:50:10.043013 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" Jan 21 14:50:10 crc kubenswrapper[4902]: I0121 14:50:10.045377 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" event={"ID":"bc4c2749-7073-4bb8-8c87-736187565b08","Type":"ContainerStarted","Data":"75576ac01a0b8a2386b243533f80fbeb65d95f9a52786ec24f9f8977324ee7ba"} Jan 21 14:50:10 crc kubenswrapper[4902]: I0121 14:50:10.059515 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" podStartSLOduration=3.726119932 podStartE2EDuration="53.059499671s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.416076595 +0000 UTC m=+922.492909624" lastFinishedPulling="2026-01-21 14:50:09.749456334 +0000 UTC m=+971.826289363" observedRunningTime="2026-01-21 14:50:10.057937069 +0000 UTC m=+972.134770098" watchObservedRunningTime="2026-01-21 14:50:10.059499671 +0000 UTC m=+972.136332700" Jan 21 14:50:10 crc kubenswrapper[4902]: I0121 14:50:10.073189 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" podStartSLOduration=4.181395923 podStartE2EDuration="53.073174267s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.01998623 +0000 UTC m=+922.096819259" lastFinishedPulling="2026-01-21 14:50:08.911764554 +0000 UTC m=+970.988597603" observedRunningTime="2026-01-21 14:50:10.070326729 +0000 UTC m=+972.147159748" watchObservedRunningTime="2026-01-21 14:50:10.073174267 +0000 UTC m=+972.150007296" Jan 21 14:50:10 crc kubenswrapper[4902]: I0121 14:50:10.729346 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:50:14 crc kubenswrapper[4902]: I0121 14:50:14.087124 4902 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" event={"ID":"b01862fd-dfad-4a73-ac90-5ef7823c06ea","Type":"ContainerStarted","Data":"754aebabeccc0afb5b9bee9cb1062360dbb8fcf1e54ce97e639d072a0a79c540"} Jan 21 14:50:14 crc kubenswrapper[4902]: I0121 14:50:14.088216 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" Jan 21 14:50:14 crc kubenswrapper[4902]: I0121 14:50:14.089258 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" event={"ID":"624ad6d5-5647-43c8-8e62-751e4c5989b3","Type":"ContainerStarted","Data":"d7f04251b633ca79874a460b36003c894296fd04f789c0208ab9106bd530325e"} Jan 21 14:50:14 crc kubenswrapper[4902]: I0121 14:50:14.089571 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" Jan 21 14:50:14 crc kubenswrapper[4902]: I0121 14:50:14.148230 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" podStartSLOduration=3.739297341 podStartE2EDuration="57.148203171s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.190950433 +0000 UTC m=+922.267783462" lastFinishedPulling="2026-01-21 14:50:13.599856263 +0000 UTC m=+975.676689292" observedRunningTime="2026-01-21 14:50:14.122441314 +0000 UTC m=+976.199274353" watchObservedRunningTime="2026-01-21 14:50:14.148203171 +0000 UTC m=+976.225036200" Jan 21 14:50:14 crc kubenswrapper[4902]: I0121 14:50:14.149313 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" podStartSLOduration=3.860634408 podStartE2EDuration="57.149304401s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.43591003 +0000 UTC m=+922.512743049" lastFinishedPulling="2026-01-21 14:50:13.724580013 +0000 UTC m=+975.801413042" observedRunningTime="2026-01-21 14:50:14.146100372 +0000 UTC m=+976.222933411" watchObservedRunningTime="2026-01-21 14:50:14.149304401 +0000 UTC m=+976.226137430" Jan 21 14:50:14 crc kubenswrapper[4902]: E0121 14:50:14.298880 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" podUID="1ffd452b-d331-4c80-a6f6-0b1b21d5fd84" Jan 21 14:50:15 crc kubenswrapper[4902]: I0121 14:50:15.099086 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" event={"ID":"7d33c2a4-c369-4a5f-9592-289c162f095c","Type":"ContainerStarted","Data":"0bb6bf1a37d7432f0da1b46e19ba2832e320d64cce0ebe0d38767074f6bb612b"} Jan 21 14:50:15 crc kubenswrapper[4902]: I0121 14:50:15.100235 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" Jan 21 14:50:15 crc kubenswrapper[4902]: I0121 14:50:15.126791 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" podStartSLOduration=3.635189125 podStartE2EDuration="58.126772951s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.436199798 +0000 UTC m=+922.513032827" lastFinishedPulling="2026-01-21 14:50:14.927783624 +0000 UTC m=+977.004616653" observedRunningTime="2026-01-21 14:50:15.118806929 +0000 UTC m=+977.195639968" watchObservedRunningTime="2026-01-21 14:50:15.126772951 +0000 UTC m=+977.203605980" Jan 21 14:50:18 crc kubenswrapper[4902]: I0121 14:50:18.033265 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" Jan 21 14:50:18 crc kubenswrapper[4902]: I0121 14:50:18.036022 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" Jan 21 14:50:18 crc kubenswrapper[4902]: I0121 14:50:18.069898 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" Jan 21 14:50:18 crc kubenswrapper[4902]: I0121 14:50:18.463102 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" Jan 21 14:50:18 crc kubenswrapper[4902]: I0121 14:50:18.612585 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" Jan 21 14:50:18 crc kubenswrapper[4902]: I0121 14:50:18.729254 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" Jan 21 14:50:19 crc kubenswrapper[4902]: I0121 14:50:19.784325 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:50:20 crc kubenswrapper[4902]: I0121 14:50:20.269993 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:50:25 crc kubenswrapper[4902]: I0121 14:50:25.176409 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" event={"ID":"2ad74206-4131-4395-8392-9697c2c164eb","Type":"ContainerStarted","Data":"21f24a1ffaa9b183d180a35af678bf424b03f1f06423693fe4451e2bb4d418f1"} Jan 21 14:50:25 crc kubenswrapper[4902]: I0121 14:50:25.177086 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" Jan 21 14:50:25 crc kubenswrapper[4902]: I0121 14:50:25.193963 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" podStartSLOduration=4.088624213 podStartE2EDuration="1m8.19394774s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.435928091 +0000 UTC m=+922.512761120" lastFinishedPulling="2026-01-21 14:50:24.541251608 +0000 UTC m=+986.618084647" observedRunningTime="2026-01-21 14:50:25.191694818 +0000 UTC m=+987.268527837" watchObservedRunningTime="2026-01-21 14:50:25.19394774 +0000 UTC m=+987.270780769" Jan 21 14:50:26 crc kubenswrapper[4902]: I0121 14:50:26.186234 4902 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" event={"ID":"1ffd452b-d331-4c80-a6f6-0b1b21d5fd84","Type":"ContainerStarted","Data":"dc60dd8fd66f1b704d4a96c680696c63dec7df7ad5e877df36b052092985150e"} Jan 21 14:50:26 crc kubenswrapper[4902]: I0121 14:50:26.207387 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" podStartSLOduration=2.909968971 podStartE2EDuration="1m8.20736287s" podCreationTimestamp="2026-01-21 14:49:18 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.435694364 +0000 UTC m=+922.512527393" lastFinishedPulling="2026-01-21 14:50:25.733088253 +0000 UTC m=+987.809921292" observedRunningTime="2026-01-21 14:50:26.200360945 +0000 UTC m=+988.277194044" watchObservedRunningTime="2026-01-21 14:50:26.20736287 +0000 UTC m=+988.284195909" Jan 21 14:50:28 crc kubenswrapper[4902]: I0121 14:50:28.665394 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" Jan 21 14:50:38 crc kubenswrapper[4902]: I0121 14:50:38.780634 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" Jan 21 14:50:47 crc kubenswrapper[4902]: I0121 14:50:47.769585 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:50:47 crc kubenswrapper[4902]: I0121 14:50:47.769877 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.842496 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9cn2p"] Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.845982 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.850349 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.851557 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.854337 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.854504 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-6mhmv" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.854664 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.855719 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9cn2p"] Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.961549 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-config\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.961993 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl6z6\" (UniqueName: \"kubernetes.io/projected/15a1e285-4b20-4390-8e14-d9d2f0101c71-kube-api-access-tl6z6\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.962102 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-dns-svc\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:55 crc kubenswrapper[4902]: I0121 14:50:55.063000 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl6z6\" (UniqueName: \"kubernetes.io/projected/15a1e285-4b20-4390-8e14-d9d2f0101c71-kube-api-access-tl6z6\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:55 crc kubenswrapper[4902]: I0121 14:50:55.063082 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-dns-svc\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:55 crc kubenswrapper[4902]: I0121 14:50:55.063150 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-config\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:55 crc kubenswrapper[4902]: I0121 14:50:55.064182 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-config\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:55 crc kubenswrapper[4902]: I0121 14:50:55.064196 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-dns-svc\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:55 crc kubenswrapper[4902]: I0121 14:50:55.082823 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl6z6\" (UniqueName: \"kubernetes.io/projected/15a1e285-4b20-4390-8e14-d9d2f0101c71-kube-api-access-tl6z6\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:55 crc kubenswrapper[4902]: I0121 14:50:55.165524 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:55 crc kubenswrapper[4902]: I0121 14:50:55.641406 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9cn2p"] Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.271659 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-kldb7"] Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.273238 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.282212 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7rwc\" (UniqueName: \"kubernetes.io/projected/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-kube-api-access-g7rwc\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.282283 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-config\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.282453 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.285816 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-kldb7"] Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.384903 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.384985 4902 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-g7rwc\" (UniqueName: \"kubernetes.io/projected/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-kube-api-access-g7rwc\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.385026 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-config\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.385944 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.387071 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-config\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.420081 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7rwc\" (UniqueName: \"kubernetes.io/projected/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-kube-api-access-g7rwc\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.455164 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" event={"ID":"15a1e285-4b20-4390-8e14-d9d2f0101c71","Type":"ContainerStarted","Data":"d9d43db0013e5ca723cebb5651583404b69e5c48b2b76ea7263f612afe6ee4be"} Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.592671 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.981241 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9cn2p"] Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.003621 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jnphw"] Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.004721 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.020007 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jnphw"] Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.059809 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-kldb7"] Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.193508 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-config\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.193570 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccpbs\" (UniqueName: \"kubernetes.io/projected/056e5d1c-0c8e-4988-8e3d-bd133023ce30-kube-api-access-ccpbs\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.193757 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-dns-svc\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.295790 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-config\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.295851 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccpbs\" (UniqueName: \"kubernetes.io/projected/056e5d1c-0c8e-4988-8e3d-bd133023ce30-kube-api-access-ccpbs\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.295922 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-dns-svc\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.296797 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-config\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.297926 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-dns-svc\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.320747 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccpbs\" (UniqueName: \"kubernetes.io/projected/056e5d1c-0c8e-4988-8e3d-bd133023ce30-kube-api-access-ccpbs\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.347525 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.445697 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.447338 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.455352 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.455407 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.455639 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.455723 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.455789 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.455983 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.456028 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9m6fj" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.462609 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.481229 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" event={"ID":"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9","Type":"ContainerStarted","Data":"5296913110c392e15b54a0f987eb61dded57186e36461bf1b89e97184d22ce54"} Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.601608 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.602182 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.602238 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.602264 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.602334 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.602455 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.602489 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.604306 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sth8r\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-kube-api-access-sth8r\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.604478 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67f50f65-9151-4444-9680-f86e0f256069-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.604519 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.604564 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67f50f65-9151-4444-9680-f86e0f256069-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.705864 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc 
kubenswrapper[4902]: I0121 14:50:57.705948 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sth8r\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-kube-api-access-sth8r\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.705984 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67f50f65-9151-4444-9680-f86e0f256069-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706000 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706016 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67f50f65-9151-4444-9680-f86e0f256069-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706085 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706103 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706135 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706158 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706174 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706193 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-tls\") pod 
\"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706745 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.707592 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.709077 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.709159 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.709474 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.709953 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.720021 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67f50f65-9151-4444-9680-f86e0f256069-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.720954 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67f50f65-9151-4444-9680-f86e0f256069-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.721178 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.725945 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.728941 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sth8r\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-kube-api-access-sth8r\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.729873 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.795598 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.845889 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jnphw"] Jan 21 14:50:57 crc kubenswrapper[4902]: W0121 14:50:57.853527 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod056e5d1c_0c8e_4988_8e3d_bd133023ce30.slice/crio-ca0db72aa9304747531a3dcf3ad66d4587463b234d1c9123a01c6a7b05b94cfc WatchSource:0}: Error finding container ca0db72aa9304747531a3dcf3ad66d4587463b234d1c9123a01c6a7b05b94cfc: Status 404 returned error can't find the container with id ca0db72aa9304747531a3dcf3ad66d4587463b234d1c9123a01c6a7b05b94cfc Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.139200 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.140778 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.143428 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.143597 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.143771 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.143924 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.144126 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.144277 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dc6mx" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.144305 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.149330 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.260660 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314014 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314083 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314151 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314184 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d7103bd-b24b-4a0c-b68a-17373307f1aa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314214 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314236 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314259 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314300 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkc98\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-kube-api-access-rkc98\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314322 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314345 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d7103bd-b24b-4a0c-b68a-17373307f1aa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314375 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.423579 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424111 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424450 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" 
Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424511 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d7103bd-b24b-4a0c-b68a-17373307f1aa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424545 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424570 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424601 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424630 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkc98\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-kube-api-access-rkc98\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424654 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424679 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d7103bd-b24b-4a0c-b68a-17373307f1aa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424712 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424842 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.425096 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for 
volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.426380 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.428758 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.431319 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d7103bd-b24b-4a0c-b68a-17373307f1aa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.433219 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.433430 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.433541 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.433663 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.436466 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.438323 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.438899 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.449947 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.450069 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 
14:50:58.472770 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkc98\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-kube-api-access-rkc98\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.473012 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.473683 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d7103bd-b24b-4a0c-b68a-17373307f1aa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.489665 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.513520 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" event={"ID":"056e5d1c-0c8e-4988-8e3d-bd133023ce30","Type":"ContainerStarted","Data":"ca0db72aa9304747531a3dcf3ad66d4587463b234d1c9123a01c6a7b05b94cfc"} Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.515133 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67f50f65-9151-4444-9680-f86e0f256069","Type":"ContainerStarted","Data":"08ee02c4a3aa1bd9f0c6f8daed756e3d6ec0c75c1f2a0da20740a10a51dd17d5"} Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.783263 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dc6mx" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.792280 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.273362 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:50:59 crc kubenswrapper[4902]: W0121 14:50:59.276802 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d7103bd_b24b_4a0c_b68a_17373307f1aa.slice/crio-43205feda26dd86650cc6a1b706524efcf814c15daa6ef3c2cb46d3126d049ac WatchSource:0}: Error finding container 43205feda26dd86650cc6a1b706524efcf814c15daa6ef3c2cb46d3126d049ac: Status 404 returned error can't find the container with id 43205feda26dd86650cc6a1b706524efcf814c15daa6ef3c2cb46d3126d049ac Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.485485 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.502859 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.506607 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.507234 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.512249 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.512354 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.513124 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-z2s6n" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.515684 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.533138 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d7103bd-b24b-4a0c-b68a-17373307f1aa","Type":"ContainerStarted","Data":"43205feda26dd86650cc6a1b706524efcf814c15daa6ef3c2cb46d3126d049ac"} Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.667916 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.668187 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.668218 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.671490 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kolla-config\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.671561 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.671650 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.671726 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x82cz\" (UniqueName: \"kubernetes.io/projected/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kube-api-access-x82cz\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.672031 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-default\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774084 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kolla-config\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774135 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774168 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774196 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x82cz\" (UniqueName: \"kubernetes.io/projected/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kube-api-access-x82cz\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774251 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-default\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774326 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774380 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774402 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774716 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.789242 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kolla-config\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.789537 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.791242 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-default\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.792446 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.828809 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.829738 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x82cz\" (UniqueName: \"kubernetes.io/projected/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kube-api-access-x82cz\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.832137 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.835171 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.132885 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.803205 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.804704 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.811821 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-v26k8" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.812127 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.812516 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.812528 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.816524 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.991109 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.991176 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.991220 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.991250 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.994142 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cs94\" (UniqueName: \"kubernetes.io/projected/94031dcf-9569-4cf1-90a9-61c962434ae8-kube-api-access-5cs94\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " 
pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.994222 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.994271 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.994335 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.082849 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.083894 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.085922 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-f7vd6" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.086188 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.093266 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.093844 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.099882 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.100094 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.100243 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.100358 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.100460 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cs94\" (UniqueName: \"kubernetes.io/projected/94031dcf-9569-4cf1-90a9-61c962434ae8-kube-api-access-5cs94\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.100557 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.100659 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.100778 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.107818 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.111137 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.111468 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.111805 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.111916 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"94031dcf-9569-4cf1-90a9-61c962434ae8\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.112875 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.144281 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.149422 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cs94\" (UniqueName: \"kubernetes.io/projected/94031dcf-9569-4cf1-90a9-61c962434ae8-kube-api-access-5cs94\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.202324 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.202488 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kolla-config\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.202542 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xllj\" (UniqueName: \"kubernetes.io/projected/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kube-api-access-5xllj\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.202575 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-config-data\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.202602 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.214853 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 
14:51:01.303354 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kolla-config\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.303412 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xllj\" (UniqueName: \"kubernetes.io/projected/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kube-api-access-5xllj\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.303434 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-config-data\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.303452 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.303471 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.304244 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kolla-config\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.304855 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-config-data\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.310564 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.315336 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.320966 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xllj\" (UniqueName: \"kubernetes.io/projected/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kube-api-access-5xllj\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 
14:51:01.477749 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.479122 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:02 crc kubenswrapper[4902]: I0121 14:51:02.920133 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:51:02 crc kubenswrapper[4902]: I0121 14:51:02.921207 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:51:02 crc kubenswrapper[4902]: I0121 14:51:02.924543 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-h8x8w" Jan 21 14:51:02 crc kubenswrapper[4902]: I0121 14:51:02.934406 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:51:03 crc kubenswrapper[4902]: I0121 14:51:03.096036 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhfv5\" (UniqueName: \"kubernetes.io/projected/14fb6fe4-7f85-4d0a-b6f6-a86c152cb113-kube-api-access-jhfv5\") pod \"kube-state-metrics-0\" (UID: \"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113\") " pod="openstack/kube-state-metrics-0" Jan 21 14:51:03 crc kubenswrapper[4902]: I0121 14:51:03.200379 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhfv5\" (UniqueName: \"kubernetes.io/projected/14fb6fe4-7f85-4d0a-b6f6-a86c152cb113-kube-api-access-jhfv5\") pod \"kube-state-metrics-0\" (UID: \"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113\") " pod="openstack/kube-state-metrics-0" Jan 21 14:51:03 crc kubenswrapper[4902]: I0121 14:51:03.221384 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhfv5\" (UniqueName: \"kubernetes.io/projected/14fb6fe4-7f85-4d0a-b6f6-a86c152cb113-kube-api-access-jhfv5\") pod \"kube-state-metrics-0\" (UID: \"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113\") " pod="openstack/kube-state-metrics-0" Jan 21 14:51:03 crc kubenswrapper[4902]: I0121 14:51:03.237129 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.096929 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kxwsm"] Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.099258 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.101945 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-kfz9n" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.110089 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.110318 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.113252 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxwsm"] Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.120202 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-4sm9h"] Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.123199 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.136680 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4sm9h"] Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.201575 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8135258-f03d-4c9a-be6f-7dd1dd099188-scripts\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.201736 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-combined-ca-bundle\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.201855 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-488bn\" (UniqueName: \"kubernetes.io/projected/bfa512c9-b91a-4a30-8a23-548ef53b094e-kube-api-access-488bn\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.201917 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.201990 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-run\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.202029 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run-ovn\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " 
pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.202069 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-log\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.202103 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa512c9-b91a-4a30-8a23-548ef53b094e-scripts\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.202161 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-ovn-controller-tls-certs\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.202184 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-lib\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.202200 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-etc-ovs\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.202261 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-log-ovn\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.202287 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smxgb\" (UniqueName: \"kubernetes.io/projected/e8135258-f03d-4c9a-be6f-7dd1dd099188-kube-api-access-smxgb\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303253 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8135258-f03d-4c9a-be6f-7dd1dd099188-scripts\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303304 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-combined-ca-bundle\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc 
kubenswrapper[4902]: I0121 14:51:06.303351 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-488bn\" (UniqueName: \"kubernetes.io/projected/bfa512c9-b91a-4a30-8a23-548ef53b094e-kube-api-access-488bn\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303381 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303410 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-run\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303432 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run-ovn\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303451 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-log\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303471 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa512c9-b91a-4a30-8a23-548ef53b094e-scripts\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303494 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-ovn-controller-tls-certs\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303514 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-etc-ovs\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303568 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-lib\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303602 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-log-ovn\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303619 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smxgb\" (UniqueName: \"kubernetes.io/projected/e8135258-f03d-4c9a-be6f-7dd1dd099188-kube-api-access-smxgb\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303974 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.309142 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-run\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.309573 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8135258-f03d-4c9a-be6f-7dd1dd099188-scripts\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.313208 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run-ovn\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.318158 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-ovn-controller-tls-certs\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.319237 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-combined-ca-bundle\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.320366 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-etc-ovs\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.320381 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-log-ovn\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.320439 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-log\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.320458 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-lib\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.321516 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa512c9-b91a-4a30-8a23-548ef53b094e-scripts\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.321530 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smxgb\" (UniqueName: \"kubernetes.io/projected/e8135258-f03d-4c9a-be6f-7dd1dd099188-kube-api-access-smxgb\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.322407 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-488bn\" (UniqueName: \"kubernetes.io/projected/bfa512c9-b91a-4a30-8a23-548ef53b094e-kube-api-access-488bn\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.426499 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.441606 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.583630 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.585733 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.591837 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.591950 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.591855 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.592109 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.592256 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8qgzz" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.663224 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.724717 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.724759 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.724780 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.724986 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d8zs\" (UniqueName: \"kubernetes.io/projected/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-kube-api-access-5d8zs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.725023 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.725073 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.725102 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-config\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.725168 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.868450 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d8zs\" (UniqueName: \"kubernetes.io/projected/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-kube-api-access-5d8zs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.868500 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.868531 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.868556 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-config\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.868602 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.868677 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.868704 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.868727 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 
14:51:07.868913 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.871151 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-config\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.871517 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.871867 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.890880 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.891445 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d8zs\" (UniqueName: \"kubernetes.io/projected/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-kube-api-access-5d8zs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.892897 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.896030 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.906774 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.962127 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.619497 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.625501 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.629295 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-8bqds" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.629343 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.629890 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.629910 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.648462 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.726243 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-config\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.726290 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.726360 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.726397 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.726421 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.726442 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9q6s\" (UniqueName: \"kubernetes.io/projected/55191d4e-0310-4e6a-a10c-902e0cc8a209-kube-api-access-g9q6s\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc 
kubenswrapper[4902]: I0121 14:51:10.726477 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.726523 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.827973 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.828102 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-config\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.828145 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.828233 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.828272 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.828303 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.828336 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9q6s\" (UniqueName: \"kubernetes.io/projected/55191d4e-0310-4e6a-a10c-902e0cc8a209-kube-api-access-g9q6s\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.828387 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-combined-ca-bundle\") 
pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.828664 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.829173 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-config\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.829665 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.830337 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.835430 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.835752 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.836103 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.860515 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.869553 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9q6s\" (UniqueName: \"kubernetes.io/projected/55191d4e-0310-4e6a-a10c-902e0cc8a209-kube-api-access-g9q6s\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.948377 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:14 crc kubenswrapper[4902]: E0121 14:51:14.484396 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 21 14:51:14 crc kubenswrapper[4902]: E0121 14:51:14.484884 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sth8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(67f50f65-9151-4444-9680-f86e0f256069): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:51:14 crc kubenswrapper[4902]: E0121 14:51:14.486567 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="67f50f65-9151-4444-9680-f86e0f256069" Jan 21 14:51:14 crc kubenswrapper[4902]: E0121 14:51:14.687012 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="67f50f65-9151-4444-9680-f86e0f256069" Jan 21 14:51:17 crc kubenswrapper[4902]: I0121 14:51:17.769687 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:51:17 crc kubenswrapper[4902]: I0121 14:51:17.770026 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:51:19 crc kubenswrapper[4902]: I0121 14:51:19.632093 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.088197 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.088695 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tl6z6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-9cn2p_openstack(15a1e285-4b20-4390-8e14-d9d2f0101c71): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.089988 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" podUID="15a1e285-4b20-4390-8e14-d9d2f0101c71" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.112036 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.112165 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7rwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-kldb7_openstack(d70f1f30-fc0e-48a8-a7b7-cf43c23331e9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.113445 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.122173 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.122293 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ccpbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-jnphw_openstack(056e5d1c-0c8e-4988-8e3d-bd133023ce30): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.217118 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" podUID="056e5d1c-0c8e-4988-8e3d-bd133023ce30" Jan 21 14:51:20 crc kubenswrapper[4902]: I0121 14:51:20.726703 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"19a933f8-5063-4cd1-8d3d-420e82d4e1fd","Type":"ContainerStarted","Data":"bf2f4711a987253bd77a78040ec2bd0cf16012bd15444fb1b640251be787c875"} Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.730257 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.730786 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" podUID="056e5d1c-0c8e-4988-8e3d-bd133023ce30" Jan 21 14:51:20 crc kubenswrapper[4902]: I0121 14:51:20.934520 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/memcached-0"] Jan 21 14:51:20 crc kubenswrapper[4902]: I0121 14:51:20.981636 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.038536 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.123586 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxwsm"] Jan 21 14:51:21 crc kubenswrapper[4902]: W0121 14:51:21.135941 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8135258_f03d_4c9a_be6f_7dd1dd099188.slice/crio-4abe7b149b5deee49487446d44f9ad3581d14a3d2ca4cc34cd11e6b49541512c WatchSource:0}: Error finding container 4abe7b149b5deee49487446d44f9ad3581d14a3d2ca4cc34cd11e6b49541512c: Status 404 returned error can't find the container with id 4abe7b149b5deee49487446d44f9ad3581d14a3d2ca4cc34cd11e6b49541512c Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.337235 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:51:21 crc kubenswrapper[4902]: W0121 14:51:21.352352 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcaf5f1ad_bafb_4a54_b8fd_503d1a3a5fd3.slice/crio-2fa4acbc229f26d07119b0fd5c43c50281090a6fcc6e1442dc8b7ca5938b7ddb WatchSource:0}: Error finding container 2fa4acbc229f26d07119b0fd5c43c50281090a6fcc6e1442dc8b7ca5938b7ddb: Status 404 returned error can't find the container with id 2fa4acbc229f26d07119b0fd5c43c50281090a6fcc6e1442dc8b7ca5938b7ddb Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.383900 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.387895 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.473528 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-config\") pod \"15a1e285-4b20-4390-8e14-d9d2f0101c71\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.473575 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl6z6\" (UniqueName: \"kubernetes.io/projected/15a1e285-4b20-4390-8e14-d9d2f0101c71-kube-api-access-tl6z6\") pod \"15a1e285-4b20-4390-8e14-d9d2f0101c71\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.473632 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-dns-svc\") pod \"15a1e285-4b20-4390-8e14-d9d2f0101c71\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.474571 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "15a1e285-4b20-4390-8e14-d9d2f0101c71" (UID: "15a1e285-4b20-4390-8e14-d9d2f0101c71"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.474993 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-config" (OuterVolumeSpecName: "config") pod "15a1e285-4b20-4390-8e14-d9d2f0101c71" (UID: "15a1e285-4b20-4390-8e14-d9d2f0101c71"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.481944 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a1e285-4b20-4390-8e14-d9d2f0101c71-kube-api-access-tl6z6" (OuterVolumeSpecName: "kube-api-access-tl6z6") pod "15a1e285-4b20-4390-8e14-d9d2f0101c71" (UID: "15a1e285-4b20-4390-8e14-d9d2f0101c71"). InnerVolumeSpecName "kube-api-access-tl6z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.484181 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4sm9h"] Jan 21 14:51:21 crc kubenswrapper[4902]: W0121 14:51:21.487008 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfa512c9_b91a_4a30_8a23_548ef53b094e.slice/crio-802447b9b93240937e871b9f5fd717abb6508a7f8537087545c7900d7f4a54d8 WatchSource:0}: Error finding container 802447b9b93240937e871b9f5fd717abb6508a7f8537087545c7900d7f4a54d8: Status 404 returned error can't find the container with id 802447b9b93240937e871b9f5fd717abb6508a7f8537087545c7900d7f4a54d8 Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.575737 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.575767 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl6z6\" (UniqueName: \"kubernetes.io/projected/15a1e285-4b20-4390-8e14-d9d2f0101c71-kube-api-access-tl6z6\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.575777 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.736646 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"55191d4e-0310-4e6a-a10c-902e0cc8a209","Type":"ContainerStarted","Data":"7b6bfe3f7296114e25ecf2caceede712b35695e06d9545a4b2270d1cce053ea2"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.737572 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" event={"ID":"15a1e285-4b20-4390-8e14-d9d2f0101c71","Type":"ContainerDied","Data":"d9d43db0013e5ca723cebb5651583404b69e5c48b2b76ea7263f612afe6ee4be"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.737604 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.738533 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3","Type":"ContainerStarted","Data":"2fa4acbc229f26d07119b0fd5c43c50281090a6fcc6e1442dc8b7ca5938b7ddb"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.739557 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2c70bcdb-316e-4246-b333-ddaf6438c6ee","Type":"ContainerStarted","Data":"012af9c88121ed6a56a653b1c142d5e67759c3d8ac9efeda00265ffdb3f91980"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.740425 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113","Type":"ContainerStarted","Data":"83455c4bf3aeb7b7c76443c4b9198dde4cf810334ccfb634a4b5c17df6d13e97"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.741327 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"94031dcf-9569-4cf1-90a9-61c962434ae8","Type":"ContainerStarted","Data":"0e2225caf36121574255d90227f9966e2a981074b953f7b34948ace2a7d9beae"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.742668 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4sm9h" event={"ID":"bfa512c9-b91a-4a30-8a23-548ef53b094e","Type":"ContainerStarted","Data":"802447b9b93240937e871b9f5fd717abb6508a7f8537087545c7900d7f4a54d8"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.744106 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d7103bd-b24b-4a0c-b68a-17373307f1aa","Type":"ContainerStarted","Data":"92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.745553 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm" event={"ID":"e8135258-f03d-4c9a-be6f-7dd1dd099188","Type":"ContainerStarted","Data":"4abe7b149b5deee49487446d44f9ad3581d14a3d2ca4cc34cd11e6b49541512c"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.815827 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9cn2p"] Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.823625 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9cn2p"] Jan 21 14:51:22 crc kubenswrapper[4902]: I0121 14:51:22.311849 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a1e285-4b20-4390-8e14-d9d2f0101c71" path="/var/lib/kubelet/pods/15a1e285-4b20-4390-8e14-d9d2f0101c71/volumes" Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.930308 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3","Type":"ContainerStarted","Data":"9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8"} Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.931803 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm" event={"ID":"e8135258-f03d-4c9a-be6f-7dd1dd099188","Type":"ContainerStarted","Data":"339126d2349790760c7b3087cf9fa15cd976581645c959f56ddb41d46b290f7c"} Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.931904 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.933529 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2c70bcdb-316e-4246-b333-ddaf6438c6ee","Type":"ContainerStarted","Data":"c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2"} Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.933683 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.936106 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113","Type":"ContainerStarted","Data":"eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba"} Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.936311 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.937784 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"55191d4e-0310-4e6a-a10c-902e0cc8a209","Type":"ContainerStarted","Data":"00bf7a3928a19891dd7e4eeb9d6cbd183d170218b09cf88bac1204f77dcea9f1"} Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.939867 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"19a933f8-5063-4cd1-8d3d-420e82d4e1fd","Type":"ContainerStarted","Data":"231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659"} Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.944650 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"94031dcf-9569-4cf1-90a9-61c962434ae8","Type":"ContainerStarted","Data":"9c7eb232194bf5acf0b72c5e4e2b10f32410c50f4767d8979981cf5af8e7ed7d"} Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.946564 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4sm9h" event={"ID":"bfa512c9-b91a-4a30-8a23-548ef53b094e","Type":"ContainerStarted","Data":"e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb"} Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.958156 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kxwsm" podStartSLOduration=15.846521812 podStartE2EDuration="22.95813955s" podCreationTimestamp="2026-01-21 14:51:06 +0000 UTC" firstStartedPulling="2026-01-21 14:51:21.141170575 +0000 UTC m=+1043.218003604" lastFinishedPulling="2026-01-21 14:51:28.252788313 +0000 UTC m=+1050.329621342" observedRunningTime="2026-01-21 14:51:28.948774569 +0000 UTC m=+1051.025607598" watchObservedRunningTime="2026-01-21 14:51:28.95813955 +0000 UTC m=+1051.034972579" Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.975124 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=20.692774964 podStartE2EDuration="27.975103782s" podCreationTimestamp="2026-01-21 14:51:01 +0000 UTC" firstStartedPulling="2026-01-21 14:51:20.965931949 +0000 UTC m=+1043.042764978" lastFinishedPulling="2026-01-21 14:51:28.248260767 +0000 UTC m=+1050.325093796" observedRunningTime="2026-01-21 14:51:28.970465123 +0000 UTC m=+1051.047298152" watchObservedRunningTime="2026-01-21 14:51:28.975103782 +0000 UTC m=+1051.051936811" Jan 21 14:51:29 crc kubenswrapper[4902]: I0121 14:51:29.046656 4902 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=19.764505679 podStartE2EDuration="27.046629132s" podCreationTimestamp="2026-01-21 14:51:02 +0000 UTC" firstStartedPulling="2026-01-21 14:51:20.986192872 +0000 UTC m=+1043.063025901" lastFinishedPulling="2026-01-21 14:51:28.268316325 +0000 UTC m=+1050.345149354" observedRunningTime="2026-01-21 14:51:29.0418975 +0000 UTC m=+1051.118730539" watchObservedRunningTime="2026-01-21 14:51:29.046629132 +0000 UTC m=+1051.123462161" Jan 21 14:51:29 crc kubenswrapper[4902]: I0121 14:51:29.954288 4902 generic.go:334] "Generic (PLEG): container finished" podID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerID="e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb" exitCode=0 Jan 21 14:51:29 crc kubenswrapper[4902]: I0121 14:51:29.954363 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4sm9h" event={"ID":"bfa512c9-b91a-4a30-8a23-548ef53b094e","Type":"ContainerDied","Data":"e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb"} Jan 21 14:51:30 crc kubenswrapper[4902]: I0121 14:51:30.967397 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4sm9h" event={"ID":"bfa512c9-b91a-4a30-8a23-548ef53b094e","Type":"ContainerStarted","Data":"df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8"} Jan 21 14:51:31 crc kubenswrapper[4902]: I0121 14:51:31.975839 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4sm9h" event={"ID":"bfa512c9-b91a-4a30-8a23-548ef53b094e","Type":"ContainerStarted","Data":"0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1"} Jan 21 14:51:31 crc kubenswrapper[4902]: I0121 14:51:31.976211 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:31 crc kubenswrapper[4902]: I0121 14:51:31.976226 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:31 crc kubenswrapper[4902]: I0121 14:51:31.977281 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3","Type":"ContainerStarted","Data":"fabbe3c5e36565bf6c2514be460d8e197d15c7ef2a2eaad51eaaf9fc51cd6931"} Jan 21 14:51:31 crc kubenswrapper[4902]: I0121 14:51:31.979475 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"55191d4e-0310-4e6a-a10c-902e0cc8a209","Type":"ContainerStarted","Data":"f33529c27085ffa8a5953825706b4cb4672e9bfd551a411eede0445f1ce65803"} Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.030879 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-4sm9h" podStartSLOduration=19.316401865 podStartE2EDuration="26.030860672s" podCreationTimestamp="2026-01-21 14:51:06 +0000 UTC" firstStartedPulling="2026-01-21 14:51:21.489407335 +0000 UTC m=+1043.566240364" lastFinishedPulling="2026-01-21 14:51:28.203866142 +0000 UTC m=+1050.280699171" observedRunningTime="2026-01-21 14:51:32.001422173 +0000 UTC m=+1054.078255222" watchObservedRunningTime="2026-01-21 14:51:32.030860672 +0000 UTC m=+1054.107693701" Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.033668 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.889791386 podStartE2EDuration="23.033652489s" podCreationTimestamp="2026-01-21 14:51:09 +0000 UTC" 
firstStartedPulling="2026-01-21 14:51:21.400388338 +0000 UTC m=+1043.477221387" lastFinishedPulling="2026-01-21 14:51:31.544249421 +0000 UTC m=+1053.621082490" observedRunningTime="2026-01-21 14:51:32.030628625 +0000 UTC m=+1054.107461684" watchObservedRunningTime="2026-01-21 14:51:32.033652489 +0000 UTC m=+1054.110485518" Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.329106 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=16.11345753 podStartE2EDuration="26.329084041s" podCreationTimestamp="2026-01-21 14:51:06 +0000 UTC" firstStartedPulling="2026-01-21 14:51:21.35483444 +0000 UTC m=+1043.431667459" lastFinishedPulling="2026-01-21 14:51:31.570460941 +0000 UTC m=+1053.647293970" observedRunningTime="2026-01-21 14:51:32.07893813 +0000 UTC m=+1054.155771159" watchObservedRunningTime="2026-01-21 14:51:32.329084041 +0000 UTC m=+1054.405917080" Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.963773 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.987759 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67f50f65-9151-4444-9680-f86e0f256069","Type":"ContainerStarted","Data":"61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80"} Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.990105 4902 generic.go:334] "Generic (PLEG): container finished" podID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerID="231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659" exitCode=0 Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.990180 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"19a933f8-5063-4cd1-8d3d-420e82d4e1fd","Type":"ContainerDied","Data":"231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659"} Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.992503 4902 generic.go:334] "Generic (PLEG): container finished" podID="94031dcf-9569-4cf1-90a9-61c962434ae8" containerID="9c7eb232194bf5acf0b72c5e4e2b10f32410c50f4767d8979981cf5af8e7ed7d" exitCode=0 Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.992552 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"94031dcf-9569-4cf1-90a9-61c962434ae8","Type":"ContainerDied","Data":"9c7eb232194bf5acf0b72c5e4e2b10f32410c50f4767d8979981cf5af8e7ed7d"} Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.994369 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" event={"ID":"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9","Type":"ContainerDied","Data":"f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2"} Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.994801 4902 generic.go:334] "Generic (PLEG): container finished" podID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" containerID="f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2" exitCode=0 Jan 21 14:51:33 crc kubenswrapper[4902]: I0121 14:51:33.243531 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.004744 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"19a933f8-5063-4cd1-8d3d-420e82d4e1fd","Type":"ContainerStarted","Data":"21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9"} Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.007246 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"94031dcf-9569-4cf1-90a9-61c962434ae8","Type":"ContainerStarted","Data":"6843f7fdaa415e7e2f0347cd97fdaa8f7eaf2a1c6b75202daa5f85889752389a"} Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.010528 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" event={"ID":"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9","Type":"ContainerStarted","Data":"dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5"} Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.011152 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.038796 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=27.887059694 podStartE2EDuration="36.038773454s" podCreationTimestamp="2026-01-21 14:50:58 +0000 UTC" firstStartedPulling="2026-01-21 14:51:20.116171053 +0000 UTC m=+1042.193004082" lastFinishedPulling="2026-01-21 14:51:28.267884813 +0000 UTC m=+1050.344717842" observedRunningTime="2026-01-21 14:51:34.034633539 +0000 UTC m=+1056.111466568" watchObservedRunningTime="2026-01-21 14:51:34.038773454 +0000 UTC m=+1056.115606493" Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.058423 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.84749614 podStartE2EDuration="35.058406871s" podCreationTimestamp="2026-01-21 14:50:59 +0000 UTC" firstStartedPulling="2026-01-21 14:51:21.039222308 +0000 UTC m=+1043.116055337" lastFinishedPulling="2026-01-21 14:51:28.250133039 +0000 UTC m=+1050.326966068" observedRunningTime="2026-01-21 14:51:34.057714971 +0000 UTC m=+1056.134548010" watchObservedRunningTime="2026-01-21 14:51:34.058406871 +0000 UTC m=+1056.135239900" Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.072885 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" podStartSLOduration=2.398618788 podStartE2EDuration="38.072864263s" podCreationTimestamp="2026-01-21 14:50:56 +0000 UTC" firstStartedPulling="2026-01-21 14:50:57.062850488 +0000 UTC m=+1019.139683517" lastFinishedPulling="2026-01-21 14:51:32.737095963 +0000 UTC m=+1054.813928992" observedRunningTime="2026-01-21 14:51:34.072239645 +0000 UTC m=+1056.149072674" watchObservedRunningTime="2026-01-21 14:51:34.072864263 +0000 UTC m=+1056.149697302" Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.949017 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.962961 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.988685 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.019704 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 
14:51:35.019794 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.060331 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.335307 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-kldb7"] Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.494437 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-4rfxx"] Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.495664 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.499791 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.515665 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-c27gh"] Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.516681 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.522337 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.554565 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-c27gh"] Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.567071 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-4rfxx"] Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608580 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608679 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-dns-svc\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608741 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608777 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovs-rundir\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608808 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-combined-ca-bundle\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608843 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-config\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608868 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq5hg\" (UniqueName: \"kubernetes.io/projected/02d63009-9822-4096-9bf1-8f71d4dacd7b-kube-api-access-hq5hg\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608892 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8891f80f-6cb0-4dc6-9f92-836d465e1c84-config\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608931 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovn-rundir\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608961 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5wqz\" (UniqueName: \"kubernetes.io/projected/8891f80f-6cb0-4dc6-9f92-836d465e1c84-kube-api-access-x5wqz\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.710884 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovs-rundir\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.710946 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-combined-ca-bundle\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.710981 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-config\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 
14:51:35.711008 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq5hg\" (UniqueName: \"kubernetes.io/projected/02d63009-9822-4096-9bf1-8f71d4dacd7b-kube-api-access-hq5hg\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.711032 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8891f80f-6cb0-4dc6-9f92-836d465e1c84-config\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.711086 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovn-rundir\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.711121 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5wqz\" (UniqueName: \"kubernetes.io/projected/8891f80f-6cb0-4dc6-9f92-836d465e1c84-kube-api-access-x5wqz\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.711164 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.711226 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-dns-svc\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.711280 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.711357 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovs-rundir\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.711606 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovn-rundir\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.712287 4902 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.712372 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8891f80f-6cb0-4dc6-9f92-836d465e1c84-config\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.712703 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-config\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.712872 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-dns-svc\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.716920 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.720057 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-combined-ca-bundle\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.735763 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5wqz\" (UniqueName: \"kubernetes.io/projected/8891f80f-6cb0-4dc6-9f92-836d465e1c84-kube-api-access-x5wqz\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.735871 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq5hg\" (UniqueName: \"kubernetes.io/projected/02d63009-9822-4096-9bf1-8f71d4dacd7b-kube-api-access-hq5hg\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.752799 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jnphw"] Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.785715 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-8sbwv"] Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.786929 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.788928 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.810767 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-8sbwv"] Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.881887 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.902730 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.915309 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.915605 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.915704 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-config\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.915746 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-dns-svc\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.915776 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbz4w\" (UniqueName: \"kubernetes.io/projected/6fbfbb64-2e43-4c95-b011-bec06204855d-kube-api-access-vbz4w\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.017729 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-config\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.017858 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-dns-svc\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 
crc kubenswrapper[4902]: I0121 14:51:36.017937 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbz4w\" (UniqueName: \"kubernetes.io/projected/6fbfbb64-2e43-4c95-b011-bec06204855d-kube-api-access-vbz4w\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.018059 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.018138 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.164104 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.164626 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-dns-svc\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.165314 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.165711 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-config\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.166588 4902 generic.go:334] "Generic (PLEG): container finished" podID="056e5d1c-0c8e-4988-8e3d-bd133023ce30" containerID="0f5c9ee80727b9e8632f288b2b3d7cfcfa77af2c1b8caf9690b633d832028cb6" exitCode=0 Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.166684 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" event={"ID":"056e5d1c-0c8e-4988-8e3d-bd133023ce30","Type":"ContainerDied","Data":"0f5c9ee80727b9e8632f288b2b3d7cfcfa77af2c1b8caf9690b633d832028cb6"} Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.168207 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" containerName="dnsmasq-dns" 
containerID="cri-o://dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5" gracePeriod=10 Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.173136 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbz4w\" (UniqueName: \"kubernetes.io/projected/6fbfbb64-2e43-4c95-b011-bec06204855d-kube-api-access-vbz4w\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.236488 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.392906 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.395666 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.422154 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.422928 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jlxv8" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.423151 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.423285 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.423402 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.429512 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.470746 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j24b6\" (UniqueName: \"kubernetes.io/projected/2ff2c3d8-2d68-4255-a175-21f0df1b9276-kube-api-access-j24b6\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.470816 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.470858 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.470901 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-config\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: 
I0121 14:51:36.470945 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-scripts\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.471006 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.471053 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.487092 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.572249 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j24b6\" (UniqueName: \"kubernetes.io/projected/2ff2c3d8-2d68-4255-a175-21f0df1b9276-kube-api-access-j24b6\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.572324 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.572391 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.572434 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-config\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.572471 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-scripts\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.572520 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.572549 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.575317 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-config\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.575857 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-scripts\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.576492 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.577960 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.581524 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.584035 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.640032 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j24b6\" (UniqueName: \"kubernetes.io/projected/2ff2c3d8-2d68-4255-a175-21f0df1b9276-kube-api-access-j24b6\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.667123 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-4rfxx"] Jan 21 14:51:36 crc kubenswrapper[4902]: W0121 14:51:36.673694 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02d63009_9822_4096_9bf1_8f71d4dacd7b.slice/crio-ef2c29276f6c62af6940939c2d275db20eac9e33e286538b85878bd0fb0e83e5 WatchSource:0}: Error finding container ef2c29276f6c62af6940939c2d275db20eac9e33e286538b85878bd0fb0e83e5: Status 404 returned error can't find the container with id ef2c29276f6c62af6940939c2d275db20eac9e33e286538b85878bd0fb0e83e5 Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.724619 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-c27gh"] Jan 21 14:51:36 crc kubenswrapper[4902]: 
I0121 14:51:36.744877 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.792036 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.875886 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccpbs\" (UniqueName: \"kubernetes.io/projected/056e5d1c-0c8e-4988-8e3d-bd133023ce30-kube-api-access-ccpbs\") pod \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.876139 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-config\") pod \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.876171 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-dns-svc\") pod \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.882712 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056e5d1c-0c8e-4988-8e3d-bd133023ce30-kube-api-access-ccpbs" (OuterVolumeSpecName: "kube-api-access-ccpbs") pod "056e5d1c-0c8e-4988-8e3d-bd133023ce30" (UID: "056e5d1c-0c8e-4988-8e3d-bd133023ce30"). InnerVolumeSpecName "kube-api-access-ccpbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.907519 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-config" (OuterVolumeSpecName: "config") pod "056e5d1c-0c8e-4988-8e3d-bd133023ce30" (UID: "056e5d1c-0c8e-4988-8e3d-bd133023ce30"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.909243 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "056e5d1c-0c8e-4988-8e3d-bd133023ce30" (UID: "056e5d1c-0c8e-4988-8e3d-bd133023ce30"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.979398 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccpbs\" (UniqueName: \"kubernetes.io/projected/056e5d1c-0c8e-4988-8e3d-bd133023ce30-kube-api-access-ccpbs\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.979425 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.979434 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.987634 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.080840 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-dns-svc\") pod \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.080925 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7rwc\" (UniqueName: \"kubernetes.io/projected/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-kube-api-access-g7rwc\") pod \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.081051 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-config\") pod \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.088492 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-kube-api-access-g7rwc" (OuterVolumeSpecName: "kube-api-access-g7rwc") pod "d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" (UID: "d70f1f30-fc0e-48a8-a7b7-cf43c23331e9"). InnerVolumeSpecName "kube-api-access-g7rwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.119151 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" (UID: "d70f1f30-fc0e-48a8-a7b7-cf43c23331e9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.135042 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-config" (OuterVolumeSpecName: "config") pod "d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" (UID: "d70f1f30-fc0e-48a8-a7b7-cf43c23331e9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.174430 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-8sbwv"] Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.179317 4902 generic.go:334] "Generic (PLEG): container finished" podID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" containerID="dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5" exitCode=0 Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.179395 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" event={"ID":"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9","Type":"ContainerDied","Data":"dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5"} Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.179423 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" event={"ID":"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9","Type":"ContainerDied","Data":"5296913110c392e15b54a0f987eb61dded57186e36461bf1b89e97184d22ce54"} Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.179440 4902 scope.go:117] "RemoveContainer" containerID="dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.179659 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.184855 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.184878 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.184888 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7rwc\" (UniqueName: \"kubernetes.io/projected/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-kube-api-access-g7rwc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.187256 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-c27gh" event={"ID":"8891f80f-6cb0-4dc6-9f92-836d465e1c84","Type":"ContainerStarted","Data":"1e365c417d7c9fc9f0e3c50b8df2956ab629924185f3c066a501456bc7f2f244"} Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.187295 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-c27gh" event={"ID":"8891f80f-6cb0-4dc6-9f92-836d465e1c84","Type":"ContainerStarted","Data":"b07d2a04235629b220fbd6c246ba8a8b5088d31b321ecb0ba20c9950895f0f74"} Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.192249 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" event={"ID":"056e5d1c-0c8e-4988-8e3d-bd133023ce30","Type":"ContainerDied","Data":"ca0db72aa9304747531a3dcf3ad66d4587463b234d1c9123a01c6a7b05b94cfc"} Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.192342 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.197710 4902 generic.go:334] "Generic (PLEG): container finished" podID="02d63009-9822-4096-9bf1-8f71d4dacd7b" containerID="1bfce2ecde4206400633bc9ed5a03f89132046bc198571a9ea9d8cdbe7e9aafa" exitCode=0 Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.198218 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" event={"ID":"02d63009-9822-4096-9bf1-8f71d4dacd7b","Type":"ContainerDied","Data":"1bfce2ecde4206400633bc9ed5a03f89132046bc198571a9ea9d8cdbe7e9aafa"} Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.198276 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" event={"ID":"02d63009-9822-4096-9bf1-8f71d4dacd7b","Type":"ContainerStarted","Data":"ef2c29276f6c62af6940939c2d275db20eac9e33e286538b85878bd0fb0e83e5"} Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.203718 4902 scope.go:117] "RemoveContainer" containerID="f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.211310 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-c27gh" podStartSLOduration=2.211293222 podStartE2EDuration="2.211293222s" podCreationTimestamp="2026-01-21 14:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:37.201389477 +0000 UTC m=+1059.278222506" watchObservedRunningTime="2026-01-21 14:51:37.211293222 +0000 UTC m=+1059.288126251" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.241207 4902 scope.go:117] "RemoveContainer" containerID="dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5" Jan 21 14:51:37 crc kubenswrapper[4902]: E0121 14:51:37.242106 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5\": container with ID starting with dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5 not found: ID does not exist" containerID="dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.242192 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5"} err="failed to get container status \"dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5\": rpc error: code = NotFound desc = could not find container \"dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5\": container with ID starting with dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5 not found: ID does not exist" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.242347 4902 scope.go:117] "RemoveContainer" containerID="f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.247620 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-kldb7"] Jan 21 14:51:37 crc kubenswrapper[4902]: E0121 14:51:37.250347 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2\": container with 
ID starting with f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2 not found: ID does not exist" containerID="f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.250398 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2"} err="failed to get container status \"f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2\": rpc error: code = NotFound desc = could not find container \"f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2\": container with ID starting with f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2 not found: ID does not exist" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.250424 4902 scope.go:117] "RemoveContainer" containerID="0f5c9ee80727b9e8632f288b2b3d7cfcfa77af2c1b8caf9690b633d832028cb6" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.300925 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-kldb7"] Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.341669 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.355894 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jnphw"] Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.364908 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jnphw"] Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.206774 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" event={"ID":"02d63009-9822-4096-9bf1-8f71d4dacd7b","Type":"ContainerStarted","Data":"fa5cddac767f0cfa37e86e0452a0e4172f930485b3055e92e46247cd7dffa247"} Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.207122 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.211315 4902 generic.go:334] "Generic (PLEG): container finished" podID="6fbfbb64-2e43-4c95-b011-bec06204855d" containerID="b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c" exitCode=0 Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.211360 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" event={"ID":"6fbfbb64-2e43-4c95-b011-bec06204855d","Type":"ContainerDied","Data":"b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c"} Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.211378 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" event={"ID":"6fbfbb64-2e43-4c95-b011-bec06204855d","Type":"ContainerStarted","Data":"88562bce194dc6ec93ebe7a9e6fd7cffd8f7caf51a0e036e6b3531ce6275c539"} Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.213583 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2ff2c3d8-2d68-4255-a175-21f0df1b9276","Type":"ContainerStarted","Data":"710e2e791f44aa4a7534510792c8ca7893edb756d648bcd8efc2a038da9f4e30"} Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.228712 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" podStartSLOduration=3.228698223 podStartE2EDuration="3.228698223s" 
podCreationTimestamp="2026-01-21 14:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:38.226521823 +0000 UTC m=+1060.303354862" watchObservedRunningTime="2026-01-21 14:51:38.228698223 +0000 UTC m=+1060.305531252" Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.316419 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056e5d1c-0c8e-4988-8e3d-bd133023ce30" path="/var/lib/kubelet/pods/056e5d1c-0c8e-4988-8e3d-bd133023ce30/volumes" Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.317234 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" path="/var/lib/kubelet/pods/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9/volumes" Jan 21 14:51:39 crc kubenswrapper[4902]: I0121 14:51:39.224475 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" event={"ID":"6fbfbb64-2e43-4c95-b011-bec06204855d","Type":"ContainerStarted","Data":"bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d"} Jan 21 14:51:39 crc kubenswrapper[4902]: I0121 14:51:39.224816 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:39 crc kubenswrapper[4902]: I0121 14:51:39.227712 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2ff2c3d8-2d68-4255-a175-21f0df1b9276","Type":"ContainerStarted","Data":"e8a096d5f6a2e59562479be65d1cff285382747948d319ddcc17f47f718069db"} Jan 21 14:51:39 crc kubenswrapper[4902]: I0121 14:51:39.227753 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2ff2c3d8-2d68-4255-a175-21f0df1b9276","Type":"ContainerStarted","Data":"c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3"} Jan 21 14:51:39 crc kubenswrapper[4902]: I0121 14:51:39.243659 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" podStartSLOduration=4.243636245 podStartE2EDuration="4.243636245s" podCreationTimestamp="2026-01-21 14:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:39.241549937 +0000 UTC m=+1061.318382976" watchObservedRunningTime="2026-01-21 14:51:39.243636245 +0000 UTC m=+1061.320469274" Jan 21 14:51:40 crc kubenswrapper[4902]: I0121 14:51:40.136912 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 21 14:51:40 crc kubenswrapper[4902]: I0121 14:51:40.136997 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 21 14:51:40 crc kubenswrapper[4902]: I0121 14:51:40.234517 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 21 14:51:41 crc kubenswrapper[4902]: I0121 14:51:41.479322 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:41 crc kubenswrapper[4902]: I0121 14:51:41.479742 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:41 crc kubenswrapper[4902]: I0121 14:51:41.678363 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:41 crc 
kubenswrapper[4902]: I0121 14:51:41.700001 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.660396658 podStartE2EDuration="5.699982785s" podCreationTimestamp="2026-01-21 14:51:36 +0000 UTC" firstStartedPulling="2026-01-21 14:51:37.373415384 +0000 UTC m=+1059.450248413" lastFinishedPulling="2026-01-21 14:51:38.413001511 +0000 UTC m=+1060.489834540" observedRunningTime="2026-01-21 14:51:39.265296228 +0000 UTC m=+1061.342129277" watchObservedRunningTime="2026-01-21 14:51:41.699982785 +0000 UTC m=+1063.776815814" Jan 21 14:51:41 crc kubenswrapper[4902]: I0121 14:51:41.995784 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 21 14:51:42 crc kubenswrapper[4902]: I0121 14:51:42.079114 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 21 14:51:42 crc kubenswrapper[4902]: I0121 14:51:42.308749 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.354627 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-8sbwv"] Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.355143 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" podUID="6fbfbb64-2e43-4c95-b011-bec06204855d" containerName="dnsmasq-dns" containerID="cri-o://bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d" gracePeriod=10 Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.372690 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.424139 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-xglm5"] Jan 21 14:51:43 crc kubenswrapper[4902]: E0121 14:51:43.424545 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" containerName="dnsmasq-dns" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.424562 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" containerName="dnsmasq-dns" Jan 21 14:51:43 crc kubenswrapper[4902]: E0121 14:51:43.424589 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" containerName="init" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.424595 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" containerName="init" Jan 21 14:51:43 crc kubenswrapper[4902]: E0121 14:51:43.424612 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056e5d1c-0c8e-4988-8e3d-bd133023ce30" containerName="init" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.424618 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="056e5d1c-0c8e-4988-8e3d-bd133023ce30" containerName="init" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.424780 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" containerName="dnsmasq-dns" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.424795 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="056e5d1c-0c8e-4988-8e3d-bd133023ce30" containerName="init" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 
14:51:43.425597 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.441608 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-xglm5"] Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.529543 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfprk\" (UniqueName: \"kubernetes.io/projected/f26a414c-0df3-4829-ad7a-c444b795160a-kube-api-access-pfprk\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.529626 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.529683 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.529704 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-config\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.529729 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.630904 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.631247 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.631271 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-config\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.631291 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.631338 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfprk\" (UniqueName: \"kubernetes.io/projected/f26a414c-0df3-4829-ad7a-c444b795160a-kube-api-access-pfprk\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.631764 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.632002 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.632393 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.632397 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-config\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.672686 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfprk\" (UniqueName: \"kubernetes.io/projected/f26a414c-0df3-4829-ad7a-c444b795160a-kube-api-access-pfprk\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.811774 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.892766 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.935576 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-nb\") pod \"6fbfbb64-2e43-4c95-b011-bec06204855d\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.935957 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbz4w\" (UniqueName: \"kubernetes.io/projected/6fbfbb64-2e43-4c95-b011-bec06204855d-kube-api-access-vbz4w\") pod \"6fbfbb64-2e43-4c95-b011-bec06204855d\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.936020 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-sb\") pod \"6fbfbb64-2e43-4c95-b011-bec06204855d\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.936051 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-config\") pod \"6fbfbb64-2e43-4c95-b011-bec06204855d\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.936140 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-dns-svc\") pod \"6fbfbb64-2e43-4c95-b011-bec06204855d\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.044193 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fbfbb64-2e43-4c95-b011-bec06204855d-kube-api-access-vbz4w" (OuterVolumeSpecName: "kube-api-access-vbz4w") pod "6fbfbb64-2e43-4c95-b011-bec06204855d" (UID: "6fbfbb64-2e43-4c95-b011-bec06204855d"). InnerVolumeSpecName "kube-api-access-vbz4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.061484 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6fbfbb64-2e43-4c95-b011-bec06204855d" (UID: "6fbfbb64-2e43-4c95-b011-bec06204855d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.063450 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-config" (OuterVolumeSpecName: "config") pod "6fbfbb64-2e43-4c95-b011-bec06204855d" (UID: "6fbfbb64-2e43-4c95-b011-bec06204855d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.074506 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6fbfbb64-2e43-4c95-b011-bec06204855d" (UID: "6fbfbb64-2e43-4c95-b011-bec06204855d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.086605 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6fbfbb64-2e43-4c95-b011-bec06204855d" (UID: "6fbfbb64-2e43-4c95-b011-bec06204855d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.139236 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.139266 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbz4w\" (UniqueName: \"kubernetes.io/projected/6fbfbb64-2e43-4c95-b011-bec06204855d-kube-api-access-vbz4w\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.139275 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.139285 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.139293 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.260322 4902 generic.go:334] "Generic (PLEG): container finished" podID="6fbfbb64-2e43-4c95-b011-bec06204855d" containerID="bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d" exitCode=0 Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.260379 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.260443 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" event={"ID":"6fbfbb64-2e43-4c95-b011-bec06204855d","Type":"ContainerDied","Data":"bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d"} Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.260489 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" event={"ID":"6fbfbb64-2e43-4c95-b011-bec06204855d","Type":"ContainerDied","Data":"88562bce194dc6ec93ebe7a9e6fd7cffd8f7caf51a0e036e6b3531ce6275c539"} Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.260505 4902 scope.go:117] "RemoveContainer" containerID="bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.279541 4902 scope.go:117] "RemoveContainer" containerID="b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.293390 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-8sbwv"] Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.304910 4902 scope.go:117] "RemoveContainer" containerID="bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d" Jan 21 14:51:44 crc kubenswrapper[4902]: E0121 14:51:44.305383 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d\": container with ID starting with bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d not found: ID does not exist" containerID="bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.305420 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d"} err="failed to get container status \"bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d\": rpc error: code = NotFound desc = could not find container \"bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d\": container with ID starting with bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d not found: ID does not exist" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.305467 4902 scope.go:117] "RemoveContainer" containerID="b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c" Jan 21 14:51:44 crc kubenswrapper[4902]: E0121 14:51:44.305903 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c\": container with ID starting with b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c not found: ID does not exist" containerID="b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.305926 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c"} err="failed to get container status \"b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c\": rpc error: code = NotFound desc = could not find container 
\"b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c\": container with ID starting with b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c not found: ID does not exist" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.306534 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-8sbwv"] Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.366149 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-xglm5"] Jan 21 14:51:44 crc kubenswrapper[4902]: W0121 14:51:44.368663 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf26a414c_0df3_4829_ad7a_c444b795160a.slice/crio-f2e59dbbb8c6adb99cbeb35911a1b6de41741dd0dd7508b3dc32a7f75a4ed19c WatchSource:0}: Error finding container f2e59dbbb8c6adb99cbeb35911a1b6de41741dd0dd7508b3dc32a7f75a4ed19c: Status 404 returned error can't find the container with id f2e59dbbb8c6adb99cbeb35911a1b6de41741dd0dd7508b3dc32a7f75a4ed19c Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.495648 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 21 14:51:44 crc kubenswrapper[4902]: E0121 14:51:44.496329 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fbfbb64-2e43-4c95-b011-bec06204855d" containerName="init" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.496351 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fbfbb64-2e43-4c95-b011-bec06204855d" containerName="init" Jan 21 14:51:44 crc kubenswrapper[4902]: E0121 14:51:44.496392 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fbfbb64-2e43-4c95-b011-bec06204855d" containerName="dnsmasq-dns" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.496402 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fbfbb64-2e43-4c95-b011-bec06204855d" containerName="dnsmasq-dns" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.496597 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fbfbb64-2e43-4c95-b011-bec06204855d" containerName="dnsmasq-dns" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.502443 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.505344 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.505680 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.505949 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.506004 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-s2887" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.531346 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.546411 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-lock\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.546471 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.546532 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-cache\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.546610 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.546647 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqvxq\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-kube-api-access-hqvxq\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.647933 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.648042 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqvxq\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-kube-api-access-hqvxq\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.648119 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-lock\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.648167 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: E0121 14:51:44.648421 4902 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 14:51:44 crc kubenswrapper[4902]: E0121 14:51:44.648523 4902 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.648554 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.648659 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-lock\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: E0121 14:51:44.648654 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift podName:ee214fec-083a-4abd-b65e-003bccee24fa nodeName:}" failed. No retries permitted until 2026-01-21 14:51:45.148635354 +0000 UTC m=+1067.225468383 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift") pod "swift-storage-0" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa") : configmap "swift-ring-files" not found Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.648916 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-cache\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.649256 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-cache\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.670955 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqvxq\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-kube-api-access-hqvxq\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.671515 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.143247 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-hmcs2"] Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.144139 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.146488 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.146560 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.146624 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.157717 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:45 crc kubenswrapper[4902]: E0121 14:51:45.157851 4902 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 14:51:45 crc kubenswrapper[4902]: E0121 14:51:45.157877 4902 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 14:51:45 crc kubenswrapper[4902]: E0121 14:51:45.157933 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift podName:ee214fec-083a-4abd-b65e-003bccee24fa nodeName:}" failed. 
No retries permitted until 2026-01-21 14:51:46.157914626 +0000 UTC m=+1068.234747645 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift") pod "swift-storage-0" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa") : configmap "swift-ring-files" not found Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.169796 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hmcs2"] Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.258894 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-combined-ca-bundle\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.258946 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9959d508-3783-403a-bdd6-65159821fc9e-etc-swift\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.258967 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkh72\" (UniqueName: \"kubernetes.io/projected/9959d508-3783-403a-bdd6-65159821fc9e-kube-api-access-fkh72\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.259024 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-scripts\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.259113 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-ring-data-devices\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.259146 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-swiftconf\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.259183 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-dispersionconf\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.272100 4902 generic.go:334] "Generic (PLEG): container finished" podID="f26a414c-0df3-4829-ad7a-c444b795160a" 
containerID="e19fecd53265fa377cce915a6f9d5418debd0cc0619facc38c21547ed0d4b095" exitCode=0 Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.272436 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" event={"ID":"f26a414c-0df3-4829-ad7a-c444b795160a","Type":"ContainerDied","Data":"e19fecd53265fa377cce915a6f9d5418debd0cc0619facc38c21547ed0d4b095"} Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.272460 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" event={"ID":"f26a414c-0df3-4829-ad7a-c444b795160a","Type":"ContainerStarted","Data":"f2e59dbbb8c6adb99cbeb35911a1b6de41741dd0dd7508b3dc32a7f75a4ed19c"} Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.361165 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-scripts\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.362082 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-ring-data-devices\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.362148 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-swiftconf\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.362198 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-dispersionconf\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.362262 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-combined-ca-bundle\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.362295 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9959d508-3783-403a-bdd6-65159821fc9e-etc-swift\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.362294 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-scripts\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.362336 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkh72\" (UniqueName: 
\"kubernetes.io/projected/9959d508-3783-403a-bdd6-65159821fc9e-kube-api-access-fkh72\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.363345 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-ring-data-devices\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.364066 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9959d508-3783-403a-bdd6-65159821fc9e-etc-swift\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.366260 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-swiftconf\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.367847 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-combined-ca-bundle\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.372444 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-dispersionconf\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.379871 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkh72\" (UniqueName: \"kubernetes.io/projected/9959d508-3783-403a-bdd6-65159821fc9e-kube-api-access-fkh72\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.495859 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.883717 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.253312 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:46 crc kubenswrapper[4902]: E0121 14:51:46.253490 4902 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 14:51:46 crc kubenswrapper[4902]: E0121 14:51:46.253516 4902 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 14:51:46 crc kubenswrapper[4902]: E0121 14:51:46.253576 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift podName:ee214fec-083a-4abd-b65e-003bccee24fa nodeName:}" failed. No retries permitted until 2026-01-21 14:51:48.253557453 +0000 UTC m=+1070.330390472 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift") pod "swift-storage-0" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa") : configmap "swift-ring-files" not found Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.305248 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fbfbb64-2e43-4c95-b011-bec06204855d" path="/var/lib/kubelet/pods/6fbfbb64-2e43-4c95-b011-bec06204855d/volumes" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.474729 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hmcs2"] Jan 21 14:51:46 crc kubenswrapper[4902]: W0121 14:51:46.477213 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9959d508_3783_403a_bdd6_65159821fc9e.slice/crio-fef5a480d26d112bdfd5701ee7922c211bab55d5f67520948d56b886a9288647 WatchSource:0}: Error finding container fef5a480d26d112bdfd5701ee7922c211bab55d5f67520948d56b886a9288647: Status 404 returned error can't find the container with id fef5a480d26d112bdfd5701ee7922c211bab55d5f67520948d56b886a9288647 Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.788654 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a0c6-account-create-update-g2pwx"] Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.789953 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.792205 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.801005 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a0c6-account-create-update-g2pwx"] Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.818450 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-62fdp"] Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.820344 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-62fdp" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.837079 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-62fdp"] Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.860894 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a55b324-126b-4571-a2ab-1ea8005e3c46-operator-scripts\") pod \"glance-db-create-62fdp\" (UID: \"9a55b324-126b-4571-a2ab-1ea8005e3c46\") " pod="openstack/glance-db-create-62fdp" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.861177 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22787b52-e166-415c-906e-788b1b73ccd0-operator-scripts\") pod \"glance-a0c6-account-create-update-g2pwx\" (UID: \"22787b52-e166-415c-906e-788b1b73ccd0\") " pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.861352 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfmhs\" (UniqueName: \"kubernetes.io/projected/22787b52-e166-415c-906e-788b1b73ccd0-kube-api-access-pfmhs\") pod \"glance-a0c6-account-create-update-g2pwx\" (UID: \"22787b52-e166-415c-906e-788b1b73ccd0\") " pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.861462 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trqf8\" (UniqueName: \"kubernetes.io/projected/9a55b324-126b-4571-a2ab-1ea8005e3c46-kube-api-access-trqf8\") pod \"glance-db-create-62fdp\" (UID: \"9a55b324-126b-4571-a2ab-1ea8005e3c46\") " pod="openstack/glance-db-create-62fdp" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.963003 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a55b324-126b-4571-a2ab-1ea8005e3c46-operator-scripts\") pod \"glance-db-create-62fdp\" (UID: \"9a55b324-126b-4571-a2ab-1ea8005e3c46\") " pod="openstack/glance-db-create-62fdp" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.963093 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22787b52-e166-415c-906e-788b1b73ccd0-operator-scripts\") pod \"glance-a0c6-account-create-update-g2pwx\" (UID: \"22787b52-e166-415c-906e-788b1b73ccd0\") " pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.963138 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfmhs\" (UniqueName: \"kubernetes.io/projected/22787b52-e166-415c-906e-788b1b73ccd0-kube-api-access-pfmhs\") pod \"glance-a0c6-account-create-update-g2pwx\" (UID: \"22787b52-e166-415c-906e-788b1b73ccd0\") " pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.963163 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trqf8\" (UniqueName: \"kubernetes.io/projected/9a55b324-126b-4571-a2ab-1ea8005e3c46-kube-api-access-trqf8\") pod \"glance-db-create-62fdp\" (UID: \"9a55b324-126b-4571-a2ab-1ea8005e3c46\") " pod="openstack/glance-db-create-62fdp" Jan 21 14:51:46 crc 
kubenswrapper[4902]: I0121 14:51:46.964018 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a55b324-126b-4571-a2ab-1ea8005e3c46-operator-scripts\") pod \"glance-db-create-62fdp\" (UID: \"9a55b324-126b-4571-a2ab-1ea8005e3c46\") " pod="openstack/glance-db-create-62fdp" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.964609 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22787b52-e166-415c-906e-788b1b73ccd0-operator-scripts\") pod \"glance-a0c6-account-create-update-g2pwx\" (UID: \"22787b52-e166-415c-906e-788b1b73ccd0\") " pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.982816 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trqf8\" (UniqueName: \"kubernetes.io/projected/9a55b324-126b-4571-a2ab-1ea8005e3c46-kube-api-access-trqf8\") pod \"glance-db-create-62fdp\" (UID: \"9a55b324-126b-4571-a2ab-1ea8005e3c46\") " pod="openstack/glance-db-create-62fdp" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.986895 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfmhs\" (UniqueName: \"kubernetes.io/projected/22787b52-e166-415c-906e-788b1b73ccd0-kube-api-access-pfmhs\") pod \"glance-a0c6-account-create-update-g2pwx\" (UID: \"22787b52-e166-415c-906e-788b1b73ccd0\") " pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.109866 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.139501 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-62fdp" Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.312463 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hmcs2" event={"ID":"9959d508-3783-403a-bdd6-65159821fc9e","Type":"ContainerStarted","Data":"fef5a480d26d112bdfd5701ee7922c211bab55d5f67520948d56b886a9288647"} Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.421125 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a0c6-account-create-update-g2pwx"] Jan 21 14:51:47 crc kubenswrapper[4902]: W0121 14:51:47.426528 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22787b52_e166_415c_906e_788b1b73ccd0.slice/crio-85146753c8fc39ddffc18caf6df87521671191f1f4b39e27ecf1ba7e0f546b40 WatchSource:0}: Error finding container 85146753c8fc39ddffc18caf6df87521671191f1f4b39e27ecf1ba7e0f546b40: Status 404 returned error can't find the container with id 85146753c8fc39ddffc18caf6df87521671191f1f4b39e27ecf1ba7e0f546b40 Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.769989 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.770116 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.770196 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.771315 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9ca57ec1458d1c5cf7c9248bedd6ee378b9620abbe566738ff33d6096aeb8f1"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.771422 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://f9ca57ec1458d1c5cf7c9248bedd6ee378b9620abbe566738ff33d6096aeb8f1" gracePeriod=600 Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.819710 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-62fdp"] Jan 21 14:51:47 crc kubenswrapper[4902]: W0121 14:51:47.822458 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a55b324_126b_4571_a2ab_1ea8005e3c46.slice/crio-d2eb35fb5799ce1ef42682c3bb497116e47e162109bced6658a35474b3df1805 WatchSource:0}: Error finding container d2eb35fb5799ce1ef42682c3bb497116e47e162109bced6658a35474b3df1805: Status 404 returned error can't find the container with id 
d2eb35fb5799ce1ef42682c3bb497116e47e162109bced6658a35474b3df1805 Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.302822 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:48 crc kubenswrapper[4902]: E0121 14:51:48.303076 4902 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 14:51:48 crc kubenswrapper[4902]: E0121 14:51:48.303109 4902 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 14:51:48 crc kubenswrapper[4902]: E0121 14:51:48.303172 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift podName:ee214fec-083a-4abd-b65e-003bccee24fa nodeName:}" failed. No retries permitted until 2026-01-21 14:51:52.303152076 +0000 UTC m=+1074.379985115 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift") pod "swift-storage-0" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa") : configmap "swift-ring-files" not found Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.322846 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-62fdp" event={"ID":"9a55b324-126b-4571-a2ab-1ea8005e3c46","Type":"ContainerStarted","Data":"d2eb35fb5799ce1ef42682c3bb497116e47e162109bced6658a35474b3df1805"} Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.324161 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a0c6-account-create-update-g2pwx" event={"ID":"22787b52-e166-415c-906e-788b1b73ccd0","Type":"ContainerStarted","Data":"85146753c8fc39ddffc18caf6df87521671191f1f4b39e27ecf1ba7e0f546b40"} Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.502622 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gtbbh"] Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.507845 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.514775 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.518715 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gtbbh"] Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.610083 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1954463b-8937-4042-a917-fe047862f4b8-operator-scripts\") pod \"root-account-create-update-gtbbh\" (UID: \"1954463b-8937-4042-a917-fe047862f4b8\") " pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.610209 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjddz\" (UniqueName: \"kubernetes.io/projected/1954463b-8937-4042-a917-fe047862f4b8-kube-api-access-bjddz\") pod \"root-account-create-update-gtbbh\" (UID: \"1954463b-8937-4042-a917-fe047862f4b8\") " pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.711550 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1954463b-8937-4042-a917-fe047862f4b8-operator-scripts\") pod \"root-account-create-update-gtbbh\" (UID: \"1954463b-8937-4042-a917-fe047862f4b8\") " pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.711680 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjddz\" (UniqueName: \"kubernetes.io/projected/1954463b-8937-4042-a917-fe047862f4b8-kube-api-access-bjddz\") pod \"root-account-create-update-gtbbh\" (UID: \"1954463b-8937-4042-a917-fe047862f4b8\") " pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.712638 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1954463b-8937-4042-a917-fe047862f4b8-operator-scripts\") pod \"root-account-create-update-gtbbh\" (UID: \"1954463b-8937-4042-a917-fe047862f4b8\") " pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.733388 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjddz\" (UniqueName: \"kubernetes.io/projected/1954463b-8937-4042-a917-fe047862f4b8-kube-api-access-bjddz\") pod \"root-account-create-update-gtbbh\" (UID: \"1954463b-8937-4042-a917-fe047862f4b8\") " pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.864946 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.350747 4902 generic.go:334] "Generic (PLEG): container finished" podID="22787b52-e166-415c-906e-788b1b73ccd0" containerID="04e51686a115d7efa7ccafee00c3c35f348877ed4159bb02ef8fdec725c74808" exitCode=0 Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.350830 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a0c6-account-create-update-g2pwx" event={"ID":"22787b52-e166-415c-906e-788b1b73ccd0","Type":"ContainerDied","Data":"04e51686a115d7efa7ccafee00c3c35f348877ed4159bb02ef8fdec725c74808"} Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.353857 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gtbbh"] Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.354597 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" event={"ID":"f26a414c-0df3-4829-ad7a-c444b795160a","Type":"ContainerStarted","Data":"2835e971956ba6f6b6ef4af53fbc776463dd7dc5cf9fe6d1cb87ca296d232dda"} Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.355089 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.357525 4902 generic.go:334] "Generic (PLEG): container finished" podID="9a55b324-126b-4571-a2ab-1ea8005e3c46" containerID="f6b39c880fbd40f2782ed02884cfa856d1ecf3dfd90d97c9787d318a34cf7495" exitCode=0 Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.357588 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-62fdp" event={"ID":"9a55b324-126b-4571-a2ab-1ea8005e3c46","Type":"ContainerDied","Data":"f6b39c880fbd40f2782ed02884cfa856d1ecf3dfd90d97c9787d318a34cf7495"} Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.360537 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="f9ca57ec1458d1c5cf7c9248bedd6ee378b9620abbe566738ff33d6096aeb8f1" exitCode=0 Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.360591 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"f9ca57ec1458d1c5cf7c9248bedd6ee378b9620abbe566738ff33d6096aeb8f1"} Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.360624 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"0203ec0a15ee1aa92f4eb3d8e44c0e52d1043afb244cf40caae4761f1f1ee369"} Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.360644 4902 scope.go:117] "RemoveContainer" containerID="097b55fd9fa87b27fef8f06ba3cbfef04c2339f11dc61a41eeced54a3451dbca" Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.410094 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" podStartSLOduration=6.410075678 podStartE2EDuration="6.410075678s" podCreationTimestamp="2026-01-21 14:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:49.408541795 +0000 UTC m=+1071.485374834" watchObservedRunningTime="2026-01-21 14:51:49.410075678 +0000 UTC m=+1071.486908717" 
Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.006765 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-bdp9p"] Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.008292 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.020912 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bdp9p"] Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.089787 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbcnc\" (UniqueName: \"kubernetes.io/projected/fd5b13a8-7950-40cf-9255-d2c9f34c6add-kube-api-access-jbcnc\") pod \"keystone-db-create-bdp9p\" (UID: \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\") " pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.089916 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5b13a8-7950-40cf-9255-d2c9f34c6add-operator-scripts\") pod \"keystone-db-create-bdp9p\" (UID: \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\") " pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.121930 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b2af-account-create-update-g4dvb"] Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.123282 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.125650 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.132238 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b2af-account-create-update-g4dvb"] Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.191144 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f05425e-47d3-4358-844c-9b661f254e22-operator-scripts\") pod \"keystone-b2af-account-create-update-g4dvb\" (UID: \"8f05425e-47d3-4358-844c-9b661f254e22\") " pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.191235 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbcnc\" (UniqueName: \"kubernetes.io/projected/fd5b13a8-7950-40cf-9255-d2c9f34c6add-kube-api-access-jbcnc\") pod \"keystone-db-create-bdp9p\" (UID: \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\") " pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.191333 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5b13a8-7950-40cf-9255-d2c9f34c6add-operator-scripts\") pod \"keystone-db-create-bdp9p\" (UID: \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\") " pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.191384 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4dfc\" (UniqueName: \"kubernetes.io/projected/8f05425e-47d3-4358-844c-9b661f254e22-kube-api-access-b4dfc\") pod 
\"keystone-b2af-account-create-update-g4dvb\" (UID: \"8f05425e-47d3-4358-844c-9b661f254e22\") " pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.192401 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5b13a8-7950-40cf-9255-d2c9f34c6add-operator-scripts\") pod \"keystone-db-create-bdp9p\" (UID: \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\") " pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.241197 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbcnc\" (UniqueName: \"kubernetes.io/projected/fd5b13a8-7950-40cf-9255-d2c9f34c6add-kube-api-access-jbcnc\") pod \"keystone-db-create-bdp9p\" (UID: \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\") " pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.292404 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4dfc\" (UniqueName: \"kubernetes.io/projected/8f05425e-47d3-4358-844c-9b661f254e22-kube-api-access-b4dfc\") pod \"keystone-b2af-account-create-update-g4dvb\" (UID: \"8f05425e-47d3-4358-844c-9b661f254e22\") " pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.292472 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f05425e-47d3-4358-844c-9b661f254e22-operator-scripts\") pod \"keystone-b2af-account-create-update-g4dvb\" (UID: \"8f05425e-47d3-4358-844c-9b661f254e22\") " pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.293533 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f05425e-47d3-4358-844c-9b661f254e22-operator-scripts\") pod \"keystone-b2af-account-create-update-g4dvb\" (UID: \"8f05425e-47d3-4358-844c-9b661f254e22\") " pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.324287 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-x9wcg"] Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.325567 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.331064 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4dfc\" (UniqueName: \"kubernetes.io/projected/8f05425e-47d3-4358-844c-9b661f254e22-kube-api-access-b4dfc\") pod \"keystone-b2af-account-create-update-g4dvb\" (UID: \"8f05425e-47d3-4358-844c-9b661f254e22\") " pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.331935 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.335776 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x9wcg"] Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.353480 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-431b-account-create-update-trwhd"] Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.354527 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.357607 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.375166 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-431b-account-create-update-trwhd"] Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.394557 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-operator-scripts\") pod \"placement-db-create-x9wcg\" (UID: \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\") " pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.395444 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef10c95-ed5c-4479-b01f-8f956d478dcf-operator-scripts\") pod \"placement-431b-account-create-update-trwhd\" (UID: \"eef10c95-ed5c-4479-b01f-8f956d478dcf\") " pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.395588 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2hms\" (UniqueName: \"kubernetes.io/projected/eef10c95-ed5c-4479-b01f-8f956d478dcf-kube-api-access-s2hms\") pod \"placement-431b-account-create-update-trwhd\" (UID: \"eef10c95-ed5c-4479-b01f-8f956d478dcf\") " pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.395727 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn79f\" (UniqueName: \"kubernetes.io/projected/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-kube-api-access-sn79f\") pod \"placement-db-create-x9wcg\" (UID: \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\") " pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.446220 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.756327 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-operator-scripts\") pod \"placement-db-create-x9wcg\" (UID: \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\") " pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.756864 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef10c95-ed5c-4479-b01f-8f956d478dcf-operator-scripts\") pod \"placement-431b-account-create-update-trwhd\" (UID: \"eef10c95-ed5c-4479-b01f-8f956d478dcf\") " pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.760359 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-operator-scripts\") pod \"placement-db-create-x9wcg\" (UID: \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\") " pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.763014 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2hms\" (UniqueName: \"kubernetes.io/projected/eef10c95-ed5c-4479-b01f-8f956d478dcf-kube-api-access-s2hms\") pod \"placement-431b-account-create-update-trwhd\" (UID: \"eef10c95-ed5c-4479-b01f-8f956d478dcf\") " pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.763141 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn79f\" (UniqueName: \"kubernetes.io/projected/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-kube-api-access-sn79f\") pod \"placement-db-create-x9wcg\" (UID: \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\") " pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.767031 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef10c95-ed5c-4479-b01f-8f956d478dcf-operator-scripts\") pod \"placement-431b-account-create-update-trwhd\" (UID: \"eef10c95-ed5c-4479-b01f-8f956d478dcf\") " pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.786092 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn79f\" (UniqueName: \"kubernetes.io/projected/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-kube-api-access-sn79f\") pod \"placement-db-create-x9wcg\" (UID: \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\") " pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.793788 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2hms\" (UniqueName: \"kubernetes.io/projected/eef10c95-ed5c-4479-b01f-8f956d478dcf-kube-api-access-s2hms\") pod \"placement-431b-account-create-update-trwhd\" (UID: \"eef10c95-ed5c-4479-b01f-8f956d478dcf\") " pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.950420 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.976520 
4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.990176 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:52 crc kubenswrapper[4902]: I0121 14:51:52.336553 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:52 crc kubenswrapper[4902]: E0121 14:51:52.336776 4902 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 14:51:52 crc kubenswrapper[4902]: E0121 14:51:52.336798 4902 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 14:51:52 crc kubenswrapper[4902]: E0121 14:51:52.336844 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift podName:ee214fec-083a-4abd-b65e-003bccee24fa nodeName:}" failed. No retries permitted until 2026-01-21 14:52:00.336827648 +0000 UTC m=+1082.413660677 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift") pod "swift-storage-0" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa") : configmap "swift-ring-files" not found Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.400719 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-62fdp" event={"ID":"9a55b324-126b-4571-a2ab-1ea8005e3c46","Type":"ContainerDied","Data":"d2eb35fb5799ce1ef42682c3bb497116e47e162109bced6658a35474b3df1805"} Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.400982 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2eb35fb5799ce1ef42682c3bb497116e47e162109bced6658a35474b3df1805" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.410096 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gtbbh" event={"ID":"1954463b-8937-4042-a917-fe047862f4b8","Type":"ContainerStarted","Data":"1e9ffed32be9a49bc998cff74fdbc43e5ee1377e006d1bfc773044e302a7d8ed"} Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.411985 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a0c6-account-create-update-g2pwx" event={"ID":"22787b52-e166-415c-906e-788b1b73ccd0","Type":"ContainerDied","Data":"85146753c8fc39ddffc18caf6df87521671191f1f4b39e27ecf1ba7e0f546b40"} Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.412060 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85146753c8fc39ddffc18caf6df87521671191f1f4b39e27ecf1ba7e0f546b40" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.536931 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-62fdp" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.612403 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trqf8\" (UniqueName: \"kubernetes.io/projected/9a55b324-126b-4571-a2ab-1ea8005e3c46-kube-api-access-trqf8\") pod \"9a55b324-126b-4571-a2ab-1ea8005e3c46\" (UID: \"9a55b324-126b-4571-a2ab-1ea8005e3c46\") " Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.612440 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a55b324-126b-4571-a2ab-1ea8005e3c46-operator-scripts\") pod \"9a55b324-126b-4571-a2ab-1ea8005e3c46\" (UID: \"9a55b324-126b-4571-a2ab-1ea8005e3c46\") " Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.613344 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a55b324-126b-4571-a2ab-1ea8005e3c46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a55b324-126b-4571-a2ab-1ea8005e3c46" (UID: "9a55b324-126b-4571-a2ab-1ea8005e3c46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.614921 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.619125 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a55b324-126b-4571-a2ab-1ea8005e3c46-kube-api-access-trqf8" (OuterVolumeSpecName: "kube-api-access-trqf8") pod "9a55b324-126b-4571-a2ab-1ea8005e3c46" (UID: "9a55b324-126b-4571-a2ab-1ea8005e3c46"). InnerVolumeSpecName "kube-api-access-trqf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.714949 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22787b52-e166-415c-906e-788b1b73ccd0-operator-scripts\") pod \"22787b52-e166-415c-906e-788b1b73ccd0\" (UID: \"22787b52-e166-415c-906e-788b1b73ccd0\") " Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.715154 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfmhs\" (UniqueName: \"kubernetes.io/projected/22787b52-e166-415c-906e-788b1b73ccd0-kube-api-access-pfmhs\") pod \"22787b52-e166-415c-906e-788b1b73ccd0\" (UID: \"22787b52-e166-415c-906e-788b1b73ccd0\") " Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.715725 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trqf8\" (UniqueName: \"kubernetes.io/projected/9a55b324-126b-4571-a2ab-1ea8005e3c46-kube-api-access-trqf8\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.715745 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a55b324-126b-4571-a2ab-1ea8005e3c46-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.715914 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22787b52-e166-415c-906e-788b1b73ccd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22787b52-e166-415c-906e-788b1b73ccd0" (UID: "22787b52-e166-415c-906e-788b1b73ccd0"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.719281 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22787b52-e166-415c-906e-788b1b73ccd0-kube-api-access-pfmhs" (OuterVolumeSpecName: "kube-api-access-pfmhs") pod "22787b52-e166-415c-906e-788b1b73ccd0" (UID: "22787b52-e166-415c-906e-788b1b73ccd0"). InnerVolumeSpecName "kube-api-access-pfmhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.813269 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.817481 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22787b52-e166-415c-906e-788b1b73ccd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.817508 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfmhs\" (UniqueName: \"kubernetes.io/projected/22787b52-e166-415c-906e-788b1b73ccd0-kube-api-access-pfmhs\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.938675 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-4rfxx"] Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.938935 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" podUID="02d63009-9822-4096-9bf1-8f71d4dacd7b" containerName="dnsmasq-dns" containerID="cri-o://fa5cddac767f0cfa37e86e0452a0e4172f930485b3055e92e46247cd7dffa247" gracePeriod=10 Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.976763 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b2af-account-create-update-g4dvb"] Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.987141 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x9wcg"] Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.038287 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-431b-account-create-update-trwhd"] Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.046778 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bdp9p"] Jan 21 14:51:54 crc kubenswrapper[4902]: W0121 14:51:54.052071 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeef10c95_ed5c_4479_b01f_8f956d478dcf.slice/crio-414ad226955c987ec7c025c549b229f68b66622db3c6c7377bf41c281090ba20 WatchSource:0}: Error finding container 414ad226955c987ec7c025c549b229f68b66622db3c6c7377bf41c281090ba20: Status 404 returned error can't find the container with id 414ad226955c987ec7c025c549b229f68b66622db3c6c7377bf41c281090ba20 Jan 21 14:51:54 crc kubenswrapper[4902]: W0121 14:51:54.065734 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56dceeb6_ebc6_44b8_aba5_5f203f1a8d5d.slice/crio-2874be65f89a65a539647e2aeacf5ef7b77341fc540507819036a595a616e45a WatchSource:0}: Error finding container 2874be65f89a65a539647e2aeacf5ef7b77341fc540507819036a595a616e45a: Status 404 returned error can't find the container with id 
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.424963 4902 generic.go:334] "Generic (PLEG): container finished" podID="02d63009-9822-4096-9bf1-8f71d4dacd7b" containerID="fa5cddac767f0cfa37e86e0452a0e4172f930485b3055e92e46247cd7dffa247" exitCode=0
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.425017 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" event={"ID":"02d63009-9822-4096-9bf1-8f71d4dacd7b","Type":"ContainerDied","Data":"fa5cddac767f0cfa37e86e0452a0e4172f930485b3055e92e46247cd7dffa247"}
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.425460 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" event={"ID":"02d63009-9822-4096-9bf1-8f71d4dacd7b","Type":"ContainerDied","Data":"ef2c29276f6c62af6940939c2d275db20eac9e33e286538b85878bd0fb0e83e5"}
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.425475 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef2c29276f6c62af6940939c2d275db20eac9e33e286538b85878bd0fb0e83e5"
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.427594 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-431b-account-create-update-trwhd" event={"ID":"eef10c95-ed5c-4479-b01f-8f956d478dcf","Type":"ContainerStarted","Data":"f8e614c23f60db2d2289c45f03de6ca360a2d28723c52bf7d5442f33e4ef3cb9"}
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.427619 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-431b-account-create-update-trwhd" event={"ID":"eef10c95-ed5c-4479-b01f-8f956d478dcf","Type":"ContainerStarted","Data":"414ad226955c987ec7c025c549b229f68b66622db3c6c7377bf41c281090ba20"}
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.432787 4902 generic.go:334] "Generic (PLEG): container finished" podID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerID="92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54" exitCode=0
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.432841 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d7103bd-b24b-4a0c-b68a-17373307f1aa","Type":"ContainerDied","Data":"92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54"}
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.436607 4902 generic.go:334] "Generic (PLEG): container finished" podID="1954463b-8937-4042-a917-fe047862f4b8" containerID="183c9aacc3759e23732dbe091d0a8125502d61ad06cbf81f3beb450ef89e7614" exitCode=0
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.436727 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gtbbh" event={"ID":"1954463b-8937-4042-a917-fe047862f4b8","Type":"ContainerDied","Data":"183c9aacc3759e23732dbe091d0a8125502d61ad06cbf81f3beb450ef89e7614"}
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.439207 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hmcs2" event={"ID":"9959d508-3783-403a-bdd6-65159821fc9e","Type":"ContainerStarted","Data":"29527624e52b61188971d77dcdc19feadc4e519866ced3ad0c73f26335294506"}
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.447977 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-431b-account-create-update-trwhd" podStartSLOduration=3.447953341 podStartE2EDuration="3.447953341s" podCreationTimestamp="2026-01-21 14:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:54.44610647 +0000 UTC m=+1076.522939499" watchObservedRunningTime="2026-01-21 14:51:54.447953341 +0000 UTC m=+1076.524786370"
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.448799 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bdp9p" event={"ID":"fd5b13a8-7950-40cf-9255-d2c9f34c6add","Type":"ContainerStarted","Data":"0db12f9364007deb6067c2c445b04573d37703a8a3c7073268d343c3233327a1"}
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.448849 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bdp9p" event={"ID":"fd5b13a8-7950-40cf-9255-d2c9f34c6add","Type":"ContainerStarted","Data":"5cb8cc3872ae580644ce626d0d89d1daf3e291701338bc5d3629a7cd3738096c"}
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.453938 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b2af-account-create-update-g4dvb" event={"ID":"8f05425e-47d3-4358-844c-9b661f254e22","Type":"ContainerStarted","Data":"7d4422a73cd9c69151e982d6a24415a420632cf5387be9a9908b89fae4b7d136"}
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.453976 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b2af-account-create-update-g4dvb" event={"ID":"8f05425e-47d3-4358-844c-9b661f254e22","Type":"ContainerStarted","Data":"4390a64682acbf30933444954ba3902efa753868aaafded59aeec375b82f230e"}
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.462207 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a0c6-account-create-update-g2pwx"
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.466060 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x9wcg" event={"ID":"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d","Type":"ContainerStarted","Data":"2e960884dfc54470df60f875a779cf61caa394a9b9eb4b58037a649720bdac73"}
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.466101 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x9wcg" event={"ID":"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d","Type":"ContainerStarted","Data":"2874be65f89a65a539647e2aeacf5ef7b77341fc540507819036a595a616e45a"}
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.466169 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-62fdp"
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.498113 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx"
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.515772 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-hmcs2" podStartSLOduration=2.722722963 podStartE2EDuration="9.515755798s" podCreationTimestamp="2026-01-21 14:51:45 +0000 UTC" firstStartedPulling="2026-01-21 14:51:46.479964863 +0000 UTC m=+1068.556797892" lastFinishedPulling="2026-01-21 14:51:53.272997698 +0000 UTC m=+1075.349830727" observedRunningTime="2026-01-21 14:51:54.508163627 +0000 UTC m=+1076.584996656" watchObservedRunningTime="2026-01-21 14:51:54.515755798 +0000 UTC m=+1076.592588827"
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.530360 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-b2af-account-create-update-g4dvb" podStartSLOduration=3.530342434 podStartE2EDuration="3.530342434s" podCreationTimestamp="2026-01-21 14:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:54.525259083 +0000 UTC m=+1076.602092112" watchObservedRunningTime="2026-01-21 14:51:54.530342434 +0000 UTC m=+1076.607175453"
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.532895 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-ovsdbserver-sb\") pod \"02d63009-9822-4096-9bf1-8f71d4dacd7b\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") "
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.532952 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-dns-svc\") pod \"02d63009-9822-4096-9bf1-8f71d4dacd7b\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") "
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.533019 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq5hg\" (UniqueName: \"kubernetes.io/projected/02d63009-9822-4096-9bf1-8f71d4dacd7b-kube-api-access-hq5hg\") pod \"02d63009-9822-4096-9bf1-8f71d4dacd7b\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") "
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.533103 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-config\") pod \"02d63009-9822-4096-9bf1-8f71d4dacd7b\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") "
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.545337 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d63009-9822-4096-9bf1-8f71d4dacd7b-kube-api-access-hq5hg" (OuterVolumeSpecName: "kube-api-access-hq5hg") pod "02d63009-9822-4096-9bf1-8f71d4dacd7b" (UID: "02d63009-9822-4096-9bf1-8f71d4dacd7b"). InnerVolumeSpecName "kube-api-access-hq5hg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.576307 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-bdp9p" podStartSLOduration=4.576261692 podStartE2EDuration="4.576261692s" podCreationTimestamp="2026-01-21 14:51:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:54.544982291 +0000 UTC m=+1076.621815320" watchObservedRunningTime="2026-01-21 14:51:54.576261692 +0000 UTC m=+1076.653094741"
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.592422 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-x9wcg" podStartSLOduration=3.592405711 podStartE2EDuration="3.592405711s" podCreationTimestamp="2026-01-21 14:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:54.580612933 +0000 UTC m=+1076.657445962" watchObservedRunningTime="2026-01-21 14:51:54.592405711 +0000 UTC m=+1076.669238740"
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.606544 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "02d63009-9822-4096-9bf1-8f71d4dacd7b" (UID: "02d63009-9822-4096-9bf1-8f71d4dacd7b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.610379 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "02d63009-9822-4096-9bf1-8f71d4dacd7b" (UID: "02d63009-9822-4096-9bf1-8f71d4dacd7b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.621958 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-config" (OuterVolumeSpecName: "config") pod "02d63009-9822-4096-9bf1-8f71d4dacd7b" (UID: "02d63009-9822-4096-9bf1-8f71d4dacd7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.634918 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.634946 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.634958 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.634967 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq5hg\" (UniqueName: \"kubernetes.io/projected/02d63009-9822-4096-9bf1-8f71d4dacd7b-kube-api-access-hq5hg\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.470779 4902 generic.go:334] "Generic (PLEG): container finished" podID="fd5b13a8-7950-40cf-9255-d2c9f34c6add" containerID="0db12f9364007deb6067c2c445b04573d37703a8a3c7073268d343c3233327a1" exitCode=0 Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.470884 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bdp9p" event={"ID":"fd5b13a8-7950-40cf-9255-d2c9f34c6add","Type":"ContainerDied","Data":"0db12f9364007deb6067c2c445b04573d37703a8a3c7073268d343c3233327a1"} Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.474091 4902 generic.go:334] "Generic (PLEG): container finished" podID="8f05425e-47d3-4358-844c-9b661f254e22" containerID="7d4422a73cd9c69151e982d6a24415a420632cf5387be9a9908b89fae4b7d136" exitCode=0 Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.474141 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b2af-account-create-update-g4dvb" event={"ID":"8f05425e-47d3-4358-844c-9b661f254e22","Type":"ContainerDied","Data":"7d4422a73cd9c69151e982d6a24415a420632cf5387be9a9908b89fae4b7d136"} Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.476187 4902 generic.go:334] "Generic (PLEG): container finished" podID="56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d" containerID="2e960884dfc54470df60f875a779cf61caa394a9b9eb4b58037a649720bdac73" exitCode=0 Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.476267 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x9wcg" event={"ID":"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d","Type":"ContainerDied","Data":"2e960884dfc54470df60f875a779cf61caa394a9b9eb4b58037a649720bdac73"} Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.478496 4902 generic.go:334] "Generic (PLEG): container finished" podID="eef10c95-ed5c-4479-b01f-8f956d478dcf" containerID="f8e614c23f60db2d2289c45f03de6ca360a2d28723c52bf7d5442f33e4ef3cb9" exitCode=0 Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.478576 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-431b-account-create-update-trwhd" event={"ID":"eef10c95-ed5c-4479-b01f-8f956d478dcf","Type":"ContainerDied","Data":"f8e614c23f60db2d2289c45f03de6ca360a2d28723c52bf7d5442f33e4ef3cb9"} Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.480601 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d7103bd-b24b-4a0c-b68a-17373307f1aa","Type":"ContainerStarted","Data":"9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70"} Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.480696 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.525758 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.673066654 podStartE2EDuration="58.525733652s" podCreationTimestamp="2026-01-21 14:50:57 +0000 UTC" firstStartedPulling="2026-01-21 14:50:59.279476579 +0000 UTC m=+1021.356309608" lastFinishedPulling="2026-01-21 14:51:20.132143577 +0000 UTC m=+1042.208976606" observedRunningTime="2026-01-21 14:51:55.515271161 +0000 UTC m=+1077.592104200" watchObservedRunningTime="2026-01-21 14:51:55.525733652 +0000 UTC m=+1077.602566681" Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.643900 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-4rfxx"] Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.651991 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-4rfxx"] Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.928860 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.058355 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjddz\" (UniqueName: \"kubernetes.io/projected/1954463b-8937-4042-a917-fe047862f4b8-kube-api-access-bjddz\") pod \"1954463b-8937-4042-a917-fe047862f4b8\" (UID: \"1954463b-8937-4042-a917-fe047862f4b8\") " Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.058422 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1954463b-8937-4042-a917-fe047862f4b8-operator-scripts\") pod \"1954463b-8937-4042-a917-fe047862f4b8\" (UID: \"1954463b-8937-4042-a917-fe047862f4b8\") " Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.059744 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1954463b-8937-4042-a917-fe047862f4b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1954463b-8937-4042-a917-fe047862f4b8" (UID: "1954463b-8937-4042-a917-fe047862f4b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.070359 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1954463b-8937-4042-a917-fe047862f4b8-kube-api-access-bjddz" (OuterVolumeSpecName: "kube-api-access-bjddz") pod "1954463b-8937-4042-a917-fe047862f4b8" (UID: "1954463b-8937-4042-a917-fe047862f4b8"). InnerVolumeSpecName "kube-api-access-bjddz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.160910 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjddz\" (UniqueName: \"kubernetes.io/projected/1954463b-8937-4042-a917-fe047862f4b8-kube-api-access-bjddz\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.160952 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1954463b-8937-4042-a917-fe047862f4b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.307493 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02d63009-9822-4096-9bf1-8f71d4dacd7b" path="/var/lib/kubelet/pods/02d63009-9822-4096-9bf1-8f71d4dacd7b/volumes" Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.492836 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gtbbh" event={"ID":"1954463b-8937-4042-a917-fe047862f4b8","Type":"ContainerDied","Data":"1e9ffed32be9a49bc998cff74fdbc43e5ee1377e006d1bfc773044e302a7d8ed"} Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.492881 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e9ffed32be9a49bc998cff74fdbc43e5ee1377e006d1bfc773044e302a7d8ed" Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.492926 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.011322 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073118 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-ktqgj"] Jan 21 14:51:57 crc kubenswrapper[4902]: E0121 14:51:57.073353 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a55b324-126b-4571-a2ab-1ea8005e3c46" containerName="mariadb-database-create" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073364 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a55b324-126b-4571-a2ab-1ea8005e3c46" containerName="mariadb-database-create" Jan 21 14:51:57 crc kubenswrapper[4902]: E0121 14:51:57.073383 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d63009-9822-4096-9bf1-8f71d4dacd7b" containerName="init" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073388 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d63009-9822-4096-9bf1-8f71d4dacd7b" containerName="init" Jan 21 14:51:57 crc kubenswrapper[4902]: E0121 14:51:57.073398 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d63009-9822-4096-9bf1-8f71d4dacd7b" containerName="dnsmasq-dns" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073404 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d63009-9822-4096-9bf1-8f71d4dacd7b" containerName="dnsmasq-dns" Jan 21 14:51:57 crc kubenswrapper[4902]: E0121 14:51:57.073413 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1954463b-8937-4042-a917-fe047862f4b8" containerName="mariadb-account-create-update" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073419 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1954463b-8937-4042-a917-fe047862f4b8" containerName="mariadb-account-create-update" Jan 21 
14:51:57 crc kubenswrapper[4902]: E0121 14:51:57.073427 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22787b52-e166-415c-906e-788b1b73ccd0" containerName="mariadb-account-create-update" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073433 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="22787b52-e166-415c-906e-788b1b73ccd0" containerName="mariadb-account-create-update" Jan 21 14:51:57 crc kubenswrapper[4902]: E0121 14:51:57.073454 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f05425e-47d3-4358-844c-9b661f254e22" containerName="mariadb-account-create-update" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073459 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f05425e-47d3-4358-844c-9b661f254e22" containerName="mariadb-account-create-update" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073607 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f05425e-47d3-4358-844c-9b661f254e22" containerName="mariadb-account-create-update" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073620 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a55b324-126b-4571-a2ab-1ea8005e3c46" containerName="mariadb-database-create" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073627 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="02d63009-9822-4096-9bf1-8f71d4dacd7b" containerName="dnsmasq-dns" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073645 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="22787b52-e166-415c-906e-788b1b73ccd0" containerName="mariadb-account-create-update" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073653 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="1954463b-8937-4042-a917-fe047862f4b8" containerName="mariadb-account-create-update" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.074090 4902 util.go:30] "No sandbox for pod can be found. 
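
The paired cpu_manager/state_mem entries above, and the memory_manager entries that follow them, are kubelet dropping CPU and memory pinning state for containers of pods that no longer exist; the removals are logged at error severity but each is immediately followed by a successful "Deleted CPUSet assignment" line, so on their own they do not indicate a fault. A small sketch that tallies which pods were swept, assuming the exact message texts above (kubelet.log is a placeholder path):

    import re
    from collections import defaultdict

    # Matches both the cpu_manager and the memory_manager variants of the message.
    RX = re.compile(r'"RemoveStaleState:? removing (?:container|state)"'
                    r' podUID="([^"]+)" containerName="([^"]+)"')

    stale = defaultdict(set)
    with open("kubelet.log") as log:  # placeholder path
        for line in log:
            match = RX.search(line)
            if match:
                stale[match.group(1)].add(match.group(2))

    for pod_uid, containers in sorted(stale.items()):
        print(pod_uid, sorted(containers))

Over this section it would list the five completed pods (the dnsmasq-dns pod plus the mariadb create/update jobs) whose teardown was logged earlier.
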
Need to start a new one" pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.077697 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-n2trw" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.077936 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.080379 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ktqgj"] Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.175810 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f05425e-47d3-4358-844c-9b661f254e22-operator-scripts\") pod \"8f05425e-47d3-4358-844c-9b661f254e22\" (UID: \"8f05425e-47d3-4358-844c-9b661f254e22\") " Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.175908 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4dfc\" (UniqueName: \"kubernetes.io/projected/8f05425e-47d3-4358-844c-9b661f254e22-kube-api-access-b4dfc\") pod \"8f05425e-47d3-4358-844c-9b661f254e22\" (UID: \"8f05425e-47d3-4358-844c-9b661f254e22\") " Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.176210 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7zxd\" (UniqueName: \"kubernetes.io/projected/eb5e91bc-7b75-4275-b1b6-998431981fca-kube-api-access-r7zxd\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.176402 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-config-data\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.176453 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-combined-ca-bundle\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.176497 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-db-sync-config-data\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.176774 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f05425e-47d3-4358-844c-9b661f254e22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f05425e-47d3-4358-844c-9b661f254e22" (UID: "8f05425e-47d3-4358-844c-9b661f254e22"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.178737 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.182257 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f05425e-47d3-4358-844c-9b661f254e22-kube-api-access-b4dfc" (OuterVolumeSpecName: "kube-api-access-b4dfc") pod "8f05425e-47d3-4358-844c-9b661f254e22" (UID: "8f05425e-47d3-4358-844c-9b661f254e22"). InnerVolumeSpecName "kube-api-access-b4dfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.195280 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.203562 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.277838 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5b13a8-7950-40cf-9255-d2c9f34c6add-operator-scripts\") pod \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\" (UID: \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\") " Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.277883 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef10c95-ed5c-4479-b01f-8f956d478dcf-operator-scripts\") pod \"eef10c95-ed5c-4479-b01f-8f956d478dcf\" (UID: \"eef10c95-ed5c-4479-b01f-8f956d478dcf\") " Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.277932 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbcnc\" (UniqueName: \"kubernetes.io/projected/fd5b13a8-7950-40cf-9255-d2c9f34c6add-kube-api-access-jbcnc\") pod \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\" (UID: \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\") " Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.277955 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn79f\" (UniqueName: \"kubernetes.io/projected/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-kube-api-access-sn79f\") pod \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\" (UID: \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\") " Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.277972 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2hms\" (UniqueName: \"kubernetes.io/projected/eef10c95-ed5c-4479-b01f-8f956d478dcf-kube-api-access-s2hms\") pod \"eef10c95-ed5c-4479-b01f-8f956d478dcf\" (UID: \"eef10c95-ed5c-4479-b01f-8f956d478dcf\") " Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.278133 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-operator-scripts\") pod \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\" (UID: \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\") " Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.278282 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-db-sync-config-data\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.278388 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7zxd\" (UniqueName: \"kubernetes.io/projected/eb5e91bc-7b75-4275-b1b6-998431981fca-kube-api-access-r7zxd\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.278456 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-config-data\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.278474 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-combined-ca-bundle\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.278517 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4dfc\" (UniqueName: \"kubernetes.io/projected/8f05425e-47d3-4358-844c-9b661f254e22-kube-api-access-b4dfc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.278533 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f05425e-47d3-4358-844c-9b661f254e22-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.279490 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eef10c95-ed5c-4479-b01f-8f956d478dcf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eef10c95-ed5c-4479-b01f-8f956d478dcf" (UID: "eef10c95-ed5c-4479-b01f-8f956d478dcf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.280150 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd5b13a8-7950-40cf-9255-d2c9f34c6add-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd5b13a8-7950-40cf-9255-d2c9f34c6add" (UID: "fd5b13a8-7950-40cf-9255-d2c9f34c6add"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.280888 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d" (UID: "56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.282936 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-db-sync-config-data\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.282973 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-combined-ca-bundle\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.283507 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-kube-api-access-sn79f" (OuterVolumeSpecName: "kube-api-access-sn79f") pod "56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d" (UID: "56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d"). InnerVolumeSpecName "kube-api-access-sn79f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.283875 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd5b13a8-7950-40cf-9255-d2c9f34c6add-kube-api-access-jbcnc" (OuterVolumeSpecName: "kube-api-access-jbcnc") pod "fd5b13a8-7950-40cf-9255-d2c9f34c6add" (UID: "fd5b13a8-7950-40cf-9255-d2c9f34c6add"). InnerVolumeSpecName "kube-api-access-jbcnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.286392 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eef10c95-ed5c-4479-b01f-8f956d478dcf-kube-api-access-s2hms" (OuterVolumeSpecName: "kube-api-access-s2hms") pod "eef10c95-ed5c-4479-b01f-8f956d478dcf" (UID: "eef10c95-ed5c-4479-b01f-8f956d478dcf"). InnerVolumeSpecName "kube-api-access-s2hms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.286923 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-config-data\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.296865 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7zxd\" (UniqueName: \"kubernetes.io/projected/eb5e91bc-7b75-4275-b1b6-998431981fca-kube-api-access-r7zxd\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.379751 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbcnc\" (UniqueName: \"kubernetes.io/projected/fd5b13a8-7950-40cf-9255-d2c9f34c6add-kube-api-access-jbcnc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.380029 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn79f\" (UniqueName: \"kubernetes.io/projected/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-kube-api-access-sn79f\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.380064 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2hms\" (UniqueName: \"kubernetes.io/projected/eef10c95-ed5c-4479-b01f-8f956d478dcf-kube-api-access-s2hms\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.380078 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.380091 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5b13a8-7950-40cf-9255-d2c9f34c6add-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.380101 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef10c95-ed5c-4479-b01f-8f956d478dcf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.492481 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.510961 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b2af-account-create-update-g4dvb" event={"ID":"8f05425e-47d3-4358-844c-9b661f254e22","Type":"ContainerDied","Data":"4390a64682acbf30933444954ba3902efa753868aaafded59aeec375b82f230e"} Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.511001 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4390a64682acbf30933444954ba3902efa753868aaafded59aeec375b82f230e" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.511727 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.512637 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x9wcg" event={"ID":"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d","Type":"ContainerDied","Data":"2874be65f89a65a539647e2aeacf5ef7b77341fc540507819036a595a616e45a"} Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.512666 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2874be65f89a65a539647e2aeacf5ef7b77341fc540507819036a595a616e45a" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.512725 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.522667 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.522967 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-431b-account-create-update-trwhd" event={"ID":"eef10c95-ed5c-4479-b01f-8f956d478dcf","Type":"ContainerDied","Data":"414ad226955c987ec7c025c549b229f68b66622db3c6c7377bf41c281090ba20"} Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.523007 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="414ad226955c987ec7c025c549b229f68b66622db3c6c7377bf41c281090ba20" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.525019 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bdp9p" event={"ID":"fd5b13a8-7950-40cf-9255-d2c9f34c6add","Type":"ContainerDied","Data":"5cb8cc3872ae580644ce626d0d89d1daf3e291701338bc5d3629a7cd3738096c"} Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.525080 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.525101 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cb8cc3872ae580644ce626d0d89d1daf3e291701338bc5d3629a7cd3738096c" Jan 21 14:51:58 crc kubenswrapper[4902]: W0121 14:51:58.083456 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb5e91bc_7b75_4275_b1b6_998431981fca.slice/crio-06bc6ebdb802a8f9e6cf31504f046445f838734188e18dd997d7eac178a9c70b WatchSource:0}: Error finding container 06bc6ebdb802a8f9e6cf31504f046445f838734188e18dd997d7eac178a9c70b: Status 404 returned error can't find the container with id 06bc6ebdb802a8f9e6cf31504f046445f838734188e18dd997d7eac178a9c70b Jan 21 14:51:58 crc kubenswrapper[4902]: I0121 14:51:58.083607 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ktqgj"] Jan 21 14:51:58 crc kubenswrapper[4902]: I0121 14:51:58.540571 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ktqgj" event={"ID":"eb5e91bc-7b75-4275-b1b6-998431981fca","Type":"ContainerStarted","Data":"06bc6ebdb802a8f9e6cf31504f046445f838734188e18dd997d7eac178a9c70b"} Jan 21 14:51:58 crc kubenswrapper[4902]: I0121 14:51:58.793136 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:51:59 crc kubenswrapper[4902]: I0121 14:51:59.787198 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gtbbh"] Jan 21 14:51:59 crc kubenswrapper[4902]: I0121 14:51:59.795125 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gtbbh"] Jan 21 14:52:00 crc kubenswrapper[4902]: I0121 14:52:00.302691 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1954463b-8937-4042-a917-fe047862f4b8" path="/var/lib/kubelet/pods/1954463b-8937-4042-a917-fe047862f4b8/volumes" Jan 21 14:52:00 crc kubenswrapper[4902]: I0121 14:52:00.431324 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:52:00 crc kubenswrapper[4902]: E0121 14:52:00.431949 4902 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 14:52:00 crc kubenswrapper[4902]: E0121 14:52:00.432024 4902 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 14:52:00 crc kubenswrapper[4902]: E0121 14:52:00.432139 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift podName:ee214fec-083a-4abd-b65e-003bccee24fa nodeName:}" failed. No retries permitted until 2026-01-21 14:52:16.432113108 +0000 UTC m=+1098.508946177 (durationBeforeRetry 16s). 
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.530759 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kxwsm" podUID="e8135258-f03d-4c9a-be6f-7dd1dd099188" containerName="ovn-controller" probeResult="failure" output=<
Jan 21 14:52:01 crc kubenswrapper[4902]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 21 14:52:01 crc kubenswrapper[4902]: >
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.586936 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4sm9h"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.591499 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4sm9h"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.818906 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kxwsm-config-s4np8"]
Jan 21 14:52:01 crc kubenswrapper[4902]: E0121 14:52:01.819331 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef10c95-ed5c-4479-b01f-8f956d478dcf" containerName="mariadb-account-create-update"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.819352 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef10c95-ed5c-4479-b01f-8f956d478dcf" containerName="mariadb-account-create-update"
Jan 21 14:52:01 crc kubenswrapper[4902]: E0121 14:52:01.819379 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d" containerName="mariadb-database-create"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.819388 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d" containerName="mariadb-database-create"
Jan 21 14:52:01 crc kubenswrapper[4902]: E0121 14:52:01.819407 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5b13a8-7950-40cf-9255-d2c9f34c6add" containerName="mariadb-database-create"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.819415 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5b13a8-7950-40cf-9255-d2c9f34c6add" containerName="mariadb-database-create"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.819588 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef10c95-ed5c-4479-b01f-8f956d478dcf" containerName="mariadb-account-create-update"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.819604 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5b13a8-7950-40cf-9255-d2c9f34c6add" containerName="mariadb-database-create"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.819610 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d" containerName="mariadb-database-create"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.820116 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.822264 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.832462 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxwsm-config-s4np8"]
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.854366 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run-ovn\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.854438 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-log-ovn\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.854488 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.854511 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6254k\" (UniqueName: \"kubernetes.io/projected/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-kube-api-access-6254k\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.854529 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-additional-scripts\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.854573 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-scripts\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956163 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-log-ovn\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956259 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956297 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6254k\" (UniqueName: \"kubernetes.io/projected/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-kube-api-access-6254k\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956323 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-additional-scripts\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956386 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-scripts\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956458 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run-ovn\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956526 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956542 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-log-ovn\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956583 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run-ovn\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.957357 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-additional-scripts\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:02 crc kubenswrapper[4902]: I0121 14:52:02.068729 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-scripts\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:02 crc kubenswrapper[4902]: I0121 14:52:02.070713 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6254k\" (UniqueName: \"kubernetes.io/projected/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-kube-api-access-6254k\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:02 crc kubenswrapper[4902]: I0121 14:52:02.145558 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxwsm-config-s4np8"
Jan 21 14:52:02 crc kubenswrapper[4902]: I0121 14:52:02.866912 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxwsm-config-s4np8"]
Jan 21 14:52:02 crc kubenswrapper[4902]: W0121 14:52:02.875575 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b6eeccf_57f9_48ef_8b50_f1b3cdadd658.slice/crio-b7ff547a0c27e98c4cfee82aa93881bfcf89d0b6e3e36720fced00cb9b5ac79b WatchSource:0}: Error finding container b7ff547a0c27e98c4cfee82aa93881bfcf89d0b6e3e36720fced00cb9b5ac79b: Status 404 returned error can't find the container with id b7ff547a0c27e98c4cfee82aa93881bfcf89d0b6e3e36720fced00cb9b5ac79b
Jan 21 14:52:03 crc kubenswrapper[4902]: I0121 14:52:03.591461 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm-config-s4np8" event={"ID":"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658","Type":"ContainerStarted","Data":"b7ff547a0c27e98c4cfee82aa93881bfcf89d0b6e3e36720fced00cb9b5ac79b"}
Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.600305 4902 generic.go:334] "Generic (PLEG): container finished" podID="9959d508-3783-403a-bdd6-65159821fc9e" containerID="29527624e52b61188971d77dcdc19feadc4e519866ced3ad0c73f26335294506" exitCode=0
Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.600386 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hmcs2" event={"ID":"9959d508-3783-403a-bdd6-65159821fc9e","Type":"ContainerDied","Data":"29527624e52b61188971d77dcdc19feadc4e519866ced3ad0c73f26335294506"}
Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.602217 4902 generic.go:334] "Generic (PLEG): container finished" podID="67f50f65-9151-4444-9680-f86e0f256069" containerID="61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80" exitCode=0
Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.602276 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67f50f65-9151-4444-9680-f86e0f256069","Type":"ContainerDied","Data":"61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80"}
Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.605580 4902 generic.go:334] "Generic (PLEG): container finished" podID="7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" containerID="4de70b4a162bef7d46289abd4a1b9363b5ded88ef279f8bfde6f5eb04e8068c8" exitCode=0
Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.605625 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm-config-s4np8" event={"ID":"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658","Type":"ContainerDied","Data":"4de70b4a162bef7d46289abd4a1b9363b5ded88ef279f8bfde6f5eb04e8068c8"}
Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.811187 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gvjmj"]
Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.812105 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gvjmj"
Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.814873 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.818692 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gvjmj"]
Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.948935 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8587\" (UniqueName: \"kubernetes.io/projected/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-kube-api-access-m8587\") pod \"root-account-create-update-gvjmj\" (UID: \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\") " pod="openstack/root-account-create-update-gvjmj"
Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.949003 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-operator-scripts\") pod \"root-account-create-update-gvjmj\" (UID: \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\") " pod="openstack/root-account-create-update-gvjmj"
Jan 21 14:52:05 crc kubenswrapper[4902]: I0121 14:52:05.050658 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8587\" (UniqueName: \"kubernetes.io/projected/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-kube-api-access-m8587\") pod \"root-account-create-update-gvjmj\" (UID: \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\") " pod="openstack/root-account-create-update-gvjmj"
Jan 21 14:52:05 crc kubenswrapper[4902]: I0121 14:52:05.050728 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-operator-scripts\") pod \"root-account-create-update-gvjmj\" (UID: \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\") " pod="openstack/root-account-create-update-gvjmj"
Jan 21 14:52:05 crc kubenswrapper[4902]: I0121 14:52:05.051499 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-operator-scripts\") pod \"root-account-create-update-gvjmj\" (UID: \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\") " pod="openstack/root-account-create-update-gvjmj"
Jan 21 14:52:05 crc kubenswrapper[4902]: I0121 14:52:05.072065 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8587\" (UniqueName: \"kubernetes.io/projected/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-kube-api-access-m8587\") pod \"root-account-create-update-gvjmj\" (UID: \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\") " pod="openstack/root-account-create-update-gvjmj"
Jan 21 14:52:05 crc kubenswrapper[4902]: I0121 14:52:05.139719 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gvjmj"
Jan 21 14:52:06 crc kubenswrapper[4902]: I0121 14:52:06.617468 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-kxwsm"
Jan 21 14:52:08 crc kubenswrapper[4902]: I0121 14:52:08.796570 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:52:16 crc kubenswrapper[4902]: I0121 14:52:16.442397 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0"
Jan 21 14:52:16 crc kubenswrapper[4902]: I0121 14:52:16.470276 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0"
Jan 21 14:52:16 crc kubenswrapper[4902]: I0121 14:52:16.674548 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 21 14:52:18 crc kubenswrapper[4902]: E0121 14:52:18.607350 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f"
Jan 21 14:52:18 crc kubenswrapper[4902]: E0121 14:52:18.608138 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f,Command:[/bin/bash],Args:[-c
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r7zxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-ktqgj_openstack(eb5e91bc-7b75-4275-b1b6-998431981fca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:52:18 crc kubenswrapper[4902]: E0121 14:52:18.609828 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-ktqgj" podUID="eb5e91bc-7b75-4275-b1b6-998431981fca" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.723620 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hmcs2" event={"ID":"9959d508-3783-403a-bdd6-65159821fc9e","Type":"ContainerDied","Data":"fef5a480d26d112bdfd5701ee7922c211bab55d5f67520948d56b886a9288647"} Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.723656 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef5a480d26d112bdfd5701ee7922c211bab55d5f67520948d56b886a9288647" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.724963 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm-config-s4np8" event={"ID":"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658","Type":"ContainerDied","Data":"b7ff547a0c27e98c4cfee82aa93881bfcf89d0b6e3e36720fced00cb9b5ac79b"} Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.724988 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7ff547a0c27e98c4cfee82aa93881bfcf89d0b6e3e36720fced00cb9b5ac79b" Jan 21 14:52:18 crc kubenswrapper[4902]: E0121 14:52:18.726837 4902 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f\\\"\"" pod="openstack/glance-db-sync-ktqgj" podUID="eb5e91bc-7b75-4275-b1b6-998431981fca" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.736173 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.746623 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936080 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-swiftconf\") pod \"9959d508-3783-403a-bdd6-65159821fc9e\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936135 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run\") pod \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936169 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkh72\" (UniqueName: \"kubernetes.io/projected/9959d508-3783-403a-bdd6-65159821fc9e-kube-api-access-fkh72\") pod \"9959d508-3783-403a-bdd6-65159821fc9e\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936204 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-scripts\") pod \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936239 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-combined-ca-bundle\") pod \"9959d508-3783-403a-bdd6-65159821fc9e\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936281 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6254k\" (UniqueName: \"kubernetes.io/projected/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-kube-api-access-6254k\") pod \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936362 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run-ovn\") pod \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936414 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-ring-data-devices\") pod \"9959d508-3783-403a-bdd6-65159821fc9e\" (UID: 
\"9959d508-3783-403a-bdd6-65159821fc9e\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936445 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-scripts\") pod \"9959d508-3783-403a-bdd6-65159821fc9e\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936471 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-log-ovn\") pod \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936508 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-additional-scripts\") pod \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936536 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-dispersionconf\") pod \"9959d508-3783-403a-bdd6-65159821fc9e\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936577 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9959d508-3783-403a-bdd6-65159821fc9e-etc-swift\") pod \"9959d508-3783-403a-bdd6-65159821fc9e\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.942192 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" (UID: "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.942250 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run" (OuterVolumeSpecName: "var-run") pod "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" (UID: "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.942423 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" (UID: "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.943344 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9959d508-3783-403a-bdd6-65159821fc9e" (UID: "9959d508-3783-403a-bdd6-65159821fc9e"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.943806 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-scripts" (OuterVolumeSpecName: "scripts") pod "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" (UID: "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.944851 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" (UID: "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.945377 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9959d508-3783-403a-bdd6-65159821fc9e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9959d508-3783-403a-bdd6-65159821fc9e" (UID: "9959d508-3783-403a-bdd6-65159821fc9e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.951312 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9959d508-3783-403a-bdd6-65159821fc9e" (UID: "9959d508-3783-403a-bdd6-65159821fc9e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.955204 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9959d508-3783-403a-bdd6-65159821fc9e-kube-api-access-fkh72" (OuterVolumeSpecName: "kube-api-access-fkh72") pod "9959d508-3783-403a-bdd6-65159821fc9e" (UID: "9959d508-3783-403a-bdd6-65159821fc9e"). InnerVolumeSpecName "kube-api-access-fkh72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.967235 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-scripts" (OuterVolumeSpecName: "scripts") pod "9959d508-3783-403a-bdd6-65159821fc9e" (UID: "9959d508-3783-403a-bdd6-65159821fc9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.967358 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-kube-api-access-6254k" (OuterVolumeSpecName: "kube-api-access-6254k") pod "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" (UID: "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658"). InnerVolumeSpecName "kube-api-access-6254k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.977198 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9959d508-3783-403a-bdd6-65159821fc9e" (UID: "9959d508-3783-403a-bdd6-65159821fc9e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.978276 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9959d508-3783-403a-bdd6-65159821fc9e" (UID: "9959d508-3783-403a-bdd6-65159821fc9e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041336 4902 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9959d508-3783-403a-bdd6-65159821fc9e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041713 4902 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041731 4902 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041742 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkh72\" (UniqueName: \"kubernetes.io/projected/9959d508-3783-403a-bdd6-65159821fc9e-kube-api-access-fkh72\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041754 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041764 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041776 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6254k\" (UniqueName: \"kubernetes.io/projected/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-kube-api-access-6254k\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041788 4902 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041798 4902 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041808 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041819 4902 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041830 4902 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041840 4902 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.060370 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gvjmj"] Jan 21 14:52:19 crc kubenswrapper[4902]: W0121 14:52:19.060852 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd0110fe_ef40_4a4b_bad7_a3c24aa5089a.slice/crio-2a0fa7fabd294929921326289d90686b7bc4b9b5e3e7bc17970170db45a3367d WatchSource:0}: Error finding container 2a0fa7fabd294929921326289d90686b7bc4b9b5e3e7bc17970170db45a3367d: Status 404 returned error can't find the container with id 2a0fa7fabd294929921326289d90686b7bc4b9b5e3e7bc17970170db45a3367d Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.221692 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 14:52:19 crc kubenswrapper[4902]: W0121 14:52:19.221782 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee214fec_083a_4abd_b65e_003bccee24fa.slice/crio-6c463f82994bcd8248458f35757eded9002826e57bff7f1770ee0560e5c7ce9d WatchSource:0}: Error finding container 6c463f82994bcd8248458f35757eded9002826e57bff7f1770ee0560e5c7ce9d: Status 404 returned error can't find the container with id 6c463f82994bcd8248458f35757eded9002826e57bff7f1770ee0560e5c7ce9d Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.039292 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"6c463f82994bcd8248458f35757eded9002826e57bff7f1770ee0560e5c7ce9d"} Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.056425 4902 generic.go:334] "Generic (PLEG): container finished" podID="dd0110fe-ef40-4a4b-bad7-a3c24aa5089a" containerID="2afbcb861df82627e26ab173626f1c8e32c7418b9f0cebb9c30b8e8a773fee20" exitCode=0 Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.056524 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gvjmj" event={"ID":"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a","Type":"ContainerDied","Data":"2afbcb861df82627e26ab173626f1c8e32c7418b9f0cebb9c30b8e8a773fee20"} Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.056591 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gvjmj" event={"ID":"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a","Type":"ContainerStarted","Data":"2a0fa7fabd294929921326289d90686b7bc4b9b5e3e7bc17970170db45a3367d"} Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.063000 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.063622 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67f50f65-9151-4444-9680-f86e0f256069","Type":"ContainerStarted","Data":"d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852"} Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.064392 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.064570 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.112389 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371952.742416 podStartE2EDuration="1m24.112358895s" podCreationTimestamp="2026-01-21 14:50:56 +0000 UTC" firstStartedPulling="2026-01-21 14:50:58.272083197 +0000 UTC m=+1020.348916216" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:20.105011627 +0000 UTC m=+1102.181844656" watchObservedRunningTime="2026-01-21 14:52:20.112358895 +0000 UTC m=+1102.189191944" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.126084 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kxwsm-config-s4np8"] Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.500739 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kxwsm-config-s4np8"] Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.513648 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kxwsm-config-6v9dp"] Jan 21 14:52:20 crc kubenswrapper[4902]: E0121 14:52:20.514029 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9959d508-3783-403a-bdd6-65159821fc9e" containerName="swift-ring-rebalance" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.514059 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9959d508-3783-403a-bdd6-65159821fc9e" containerName="swift-ring-rebalance" Jan 21 14:52:20 crc kubenswrapper[4902]: E0121 14:52:20.514078 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" containerName="ovn-config" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.514084 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" containerName="ovn-config" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.514406 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" containerName="ovn-config" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.514427 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9959d508-3783-403a-bdd6-65159821fc9e" containerName="swift-ring-rebalance" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.514917 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.518388 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.522586 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxwsm-config-6v9dp"] Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.588437 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run-ovn\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.588781 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.588814 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-scripts\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.588870 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-additional-scripts\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.588887 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-log-ovn\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.589128 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjdcg\" (UniqueName: \"kubernetes.io/projected/63ce75de-3f15-43b4-96c9-70c0b03f9280-kube-api-access-cjdcg\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.690391 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjdcg\" (UniqueName: \"kubernetes.io/projected/63ce75de-3f15-43b4-96c9-70c0b03f9280-kube-api-access-cjdcg\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.690711 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run-ovn\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.690748 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.690921 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-scripts\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.690975 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-additional-scripts\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.690991 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-log-ovn\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.691060 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run-ovn\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.691078 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.691131 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-log-ovn\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.691784 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-additional-scripts\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.693114 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-scripts\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.719262 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjdcg\" (UniqueName: \"kubernetes.io/projected/63ce75de-3f15-43b4-96c9-70c0b03f9280-kube-api-access-cjdcg\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.841151 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:21 crc kubenswrapper[4902]: I0121 14:52:21.235595 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxwsm-config-6v9dp"] Jan 21 14:52:21 crc kubenswrapper[4902]: I0121 14:52:21.549451 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gvjmj" Jan 21 14:52:21 crc kubenswrapper[4902]: I0121 14:52:21.719594 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-operator-scripts\") pod \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\" (UID: \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\") " Jan 21 14:52:21 crc kubenswrapper[4902]: I0121 14:52:21.720029 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8587\" (UniqueName: \"kubernetes.io/projected/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-kube-api-access-m8587\") pod \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\" (UID: \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\") " Jan 21 14:52:21 crc kubenswrapper[4902]: I0121 14:52:21.947097 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd0110fe-ef40-4a4b-bad7-a3c24aa5089a" (UID: "dd0110fe-ef40-4a4b-bad7-a3c24aa5089a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:21 crc kubenswrapper[4902]: I0121 14:52:21.948259 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:21 crc kubenswrapper[4902]: I0121 14:52:21.973532 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-kube-api-access-m8587" (OuterVolumeSpecName: "kube-api-access-m8587") pod "dd0110fe-ef40-4a4b-bad7-a3c24aa5089a" (UID: "dd0110fe-ef40-4a4b-bad7-a3c24aa5089a"). InnerVolumeSpecName "kube-api-access-m8587". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.053965 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8587\" (UniqueName: \"kubernetes.io/projected/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-kube-api-access-m8587\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.088995 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135"} Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.089054 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5"} Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.089067 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b"} Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.090216 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gvjmj" event={"ID":"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a","Type":"ContainerDied","Data":"2a0fa7fabd294929921326289d90686b7bc4b9b5e3e7bc17970170db45a3367d"} Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.090251 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a0fa7fabd294929921326289d90686b7bc4b9b5e3e7bc17970170db45a3367d" Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.090311 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gvjmj" Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.095994 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm-config-6v9dp" event={"ID":"63ce75de-3f15-43b4-96c9-70c0b03f9280","Type":"ContainerStarted","Data":"493b2b2d4384ed074008865724712fb9ff226fa56a68d6f6b8711c1447a2d13b"} Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.096057 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm-config-6v9dp" event={"ID":"63ce75de-3f15-43b4-96c9-70c0b03f9280","Type":"ContainerStarted","Data":"6c871d02c54f921f78654feadccf4922a73121fd0476fec35c7d6d749146bf27"} Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.117512 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kxwsm-config-6v9dp" podStartSLOduration=2.117495784 podStartE2EDuration="2.117495784s" podCreationTimestamp="2026-01-21 14:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:22.114760247 +0000 UTC m=+1104.191593286" watchObservedRunningTime="2026-01-21 14:52:22.117495784 +0000 UTC m=+1104.194328813" Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.808002 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" path="/var/lib/kubelet/pods/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658/volumes" Jan 21 14:52:23 crc kubenswrapper[4902]: I0121 14:52:23.105136 4902 generic.go:334] "Generic (PLEG): container finished" podID="63ce75de-3f15-43b4-96c9-70c0b03f9280" containerID="493b2b2d4384ed074008865724712fb9ff226fa56a68d6f6b8711c1447a2d13b" exitCode=0 Jan 21 14:52:23 crc kubenswrapper[4902]: I0121 14:52:23.105206 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm-config-6v9dp" event={"ID":"63ce75de-3f15-43b4-96c9-70c0b03f9280","Type":"ContainerDied","Data":"493b2b2d4384ed074008865724712fb9ff226fa56a68d6f6b8711c1447a2d13b"} Jan 21 14:52:23 crc kubenswrapper[4902]: I0121 14:52:23.107971 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad"} Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.446226 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.529854 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-scripts\") pod \"63ce75de-3f15-43b4-96c9-70c0b03f9280\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.529949 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjdcg\" (UniqueName: \"kubernetes.io/projected/63ce75de-3f15-43b4-96c9-70c0b03f9280-kube-api-access-cjdcg\") pod \"63ce75de-3f15-43b4-96c9-70c0b03f9280\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.530000 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-log-ovn\") pod \"63ce75de-3f15-43b4-96c9-70c0b03f9280\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.530156 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-additional-scripts\") pod \"63ce75de-3f15-43b4-96c9-70c0b03f9280\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.530209 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run-ovn\") pod \"63ce75de-3f15-43b4-96c9-70c0b03f9280\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.530345 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run\") pod \"63ce75de-3f15-43b4-96c9-70c0b03f9280\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.530345 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "63ce75de-3f15-43b4-96c9-70c0b03f9280" (UID: "63ce75de-3f15-43b4-96c9-70c0b03f9280"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.530459 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "63ce75de-3f15-43b4-96c9-70c0b03f9280" (UID: "63ce75de-3f15-43b4-96c9-70c0b03f9280"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.530553 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run" (OuterVolumeSpecName: "var-run") pod "63ce75de-3f15-43b4-96c9-70c0b03f9280" (UID: "63ce75de-3f15-43b4-96c9-70c0b03f9280"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.530978 4902 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.531009 4902 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.531024 4902 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.531029 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "63ce75de-3f15-43b4-96c9-70c0b03f9280" (UID: "63ce75de-3f15-43b4-96c9-70c0b03f9280"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.531391 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-scripts" (OuterVolumeSpecName: "scripts") pod "63ce75de-3f15-43b4-96c9-70c0b03f9280" (UID: "63ce75de-3f15-43b4-96c9-70c0b03f9280"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.536186 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ce75de-3f15-43b4-96c9-70c0b03f9280-kube-api-access-cjdcg" (OuterVolumeSpecName: "kube-api-access-cjdcg") pod "63ce75de-3f15-43b4-96c9-70c0b03f9280" (UID: "63ce75de-3f15-43b4-96c9-70c0b03f9280"). InnerVolumeSpecName "kube-api-access-cjdcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.633173 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjdcg\" (UniqueName: \"kubernetes.io/projected/63ce75de-3f15-43b4-96c9-70c0b03f9280-kube-api-access-cjdcg\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.633226 4902 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.633245 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:26 crc kubenswrapper[4902]: I0121 14:52:26.315690 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm-config-6v9dp" event={"ID":"63ce75de-3f15-43b4-96c9-70c0b03f9280","Type":"ContainerDied","Data":"6c871d02c54f921f78654feadccf4922a73121fd0476fec35c7d6d749146bf27"} Jan 21 14:52:26 crc kubenswrapper[4902]: I0121 14:52:26.315730 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c871d02c54f921f78654feadccf4922a73121fd0476fec35c7d6d749146bf27" Jan 21 14:52:26 crc kubenswrapper[4902]: I0121 14:52:26.315766 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:26 crc kubenswrapper[4902]: I0121 14:52:26.521331 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kxwsm-config-6v9dp"] Jan 21 14:52:26 crc kubenswrapper[4902]: I0121 14:52:26.529486 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kxwsm-config-6v9dp"] Jan 21 14:52:28 crc kubenswrapper[4902]: I0121 14:52:28.310456 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63ce75de-3f15-43b4-96c9-70c0b03f9280" path="/var/lib/kubelet/pods/63ce75de-3f15-43b4-96c9-70c0b03f9280/volumes" Jan 21 14:52:29 crc kubenswrapper[4902]: I0121 14:52:29.395017 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157"} Jan 21 14:52:30 crc kubenswrapper[4902]: I0121 14:52:30.406208 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a"} Jan 21 14:52:31 crc kubenswrapper[4902]: I0121 14:52:31.418706 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e"} Jan 21 14:52:31 crc kubenswrapper[4902]: I0121 14:52:31.419187 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606"} Jan 21 14:52:32 crc kubenswrapper[4902]: I0121 14:52:32.448316 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9"} Jan 21 14:52:32 crc kubenswrapper[4902]: I0121 14:52:32.448779 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f"} Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.458442 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ktqgj" event={"ID":"eb5e91bc-7b75-4275-b1b6-998431981fca","Type":"ContainerStarted","Data":"dff0b2c9f0b06182d720253d8f2ef15a7b10dcf34cc35665586623d88b252d47"} Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.469663 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc"} Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.469712 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f"} Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.469725 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e"} Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.469737 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179"} Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.469749 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a"} Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.491472 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-ktqgj" podStartSLOduration=2.8625412409999997 podStartE2EDuration="36.491456555s" podCreationTimestamp="2026-01-21 14:51:57 +0000 UTC" firstStartedPulling="2026-01-21 14:51:58.085767847 +0000 UTC m=+1080.162600876" lastFinishedPulling="2026-01-21 14:52:31.714683161 +0000 UTC m=+1113.791516190" observedRunningTime="2026-01-21 14:52:33.482672196 +0000 UTC m=+1115.559505225" watchObservedRunningTime="2026-01-21 14:52:33.491456555 +0000 UTC m=+1115.568289584" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.521891 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.767819182 podStartE2EDuration="50.521863585s" podCreationTimestamp="2026-01-21 14:51:43 +0000 UTC" firstStartedPulling="2026-01-21 14:52:19.224184716 +0000 UTC m=+1101.301017745" lastFinishedPulling="2026-01-21 14:52:31.978229119 +0000 UTC m=+1114.055062148" observedRunningTime="2026-01-21 14:52:33.51108307 +0000 UTC m=+1115.587916109" watchObservedRunningTime="2026-01-21 14:52:33.521863585 +0000 UTC m=+1115.598696624" Jan 21 14:52:33 crc kubenswrapper[4902]: 
I0121 14:52:33.771398 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8db84466c-ns7jh"] Jan 21 14:52:33 crc kubenswrapper[4902]: E0121 14:52:33.771771 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0110fe-ef40-4a4b-bad7-a3c24aa5089a" containerName="mariadb-account-create-update" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.771790 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0110fe-ef40-4a4b-bad7-a3c24aa5089a" containerName="mariadb-account-create-update" Jan 21 14:52:33 crc kubenswrapper[4902]: E0121 14:52:33.771816 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ce75de-3f15-43b4-96c9-70c0b03f9280" containerName="ovn-config" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.771825 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ce75de-3f15-43b4-96c9-70c0b03f9280" containerName="ovn-config" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.772037 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0110fe-ef40-4a4b-bad7-a3c24aa5089a" containerName="mariadb-account-create-update" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.772085 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="63ce75de-3f15-43b4-96c9-70c0b03f9280" containerName="ovn-config" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.773120 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.774951 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.798740 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-ns7jh"] Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.893977 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-svc\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.894068 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.894134 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.894181 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgdkp\" (UniqueName: \"kubernetes.io/projected/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-kube-api-access-xgdkp\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 
14:52:33.894239 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-config\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.894268 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.995273 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgdkp\" (UniqueName: \"kubernetes.io/projected/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-kube-api-access-xgdkp\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.995365 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-config\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.995396 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.995455 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-svc\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.995481 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.995537 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.996492 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.997616 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-config\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.998278 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.998895 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-svc\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.999661 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:34 crc kubenswrapper[4902]: I0121 14:52:34.019306 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgdkp\" (UniqueName: \"kubernetes.io/projected/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-kube-api-access-xgdkp\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:34 crc kubenswrapper[4902]: I0121 14:52:34.096428 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:34 crc kubenswrapper[4902]: I0121 14:52:34.550501 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-ns7jh"] Jan 21 14:52:34 crc kubenswrapper[4902]: W0121 14:52:34.561310 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cfdec8c_8d41_4ae4_ad01_a4b76f589140.slice/crio-f37440f856bd371cff80f2f0f1e426de41d7fcfe1af9a8b2a61bd34561bbe363 WatchSource:0}: Error finding container f37440f856bd371cff80f2f0f1e426de41d7fcfe1af9a8b2a61bd34561bbe363: Status 404 returned error can't find the container with id f37440f856bd371cff80f2f0f1e426de41d7fcfe1af9a8b2a61bd34561bbe363 Jan 21 14:52:35 crc kubenswrapper[4902]: I0121 14:52:35.489814 4902 generic.go:334] "Generic (PLEG): container finished" podID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerID="085cc064e188fc067c109385def65abea0a69b47e1fe8f6dadc55d4ea12c4007" exitCode=0 Jan 21 14:52:35 crc kubenswrapper[4902]: I0121 14:52:35.489885 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" event={"ID":"9cfdec8c-8d41-4ae4-ad01-a4b76f589140","Type":"ContainerDied","Data":"085cc064e188fc067c109385def65abea0a69b47e1fe8f6dadc55d4ea12c4007"} Jan 21 14:52:35 crc kubenswrapper[4902]: I0121 14:52:35.490126 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" event={"ID":"9cfdec8c-8d41-4ae4-ad01-a4b76f589140","Type":"ContainerStarted","Data":"f37440f856bd371cff80f2f0f1e426de41d7fcfe1af9a8b2a61bd34561bbe363"} Jan 21 14:52:36 crc kubenswrapper[4902]: I0121 14:52:36.500392 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" event={"ID":"9cfdec8c-8d41-4ae4-ad01-a4b76f589140","Type":"ContainerStarted","Data":"db22298ae310fe9c4abed7194da190cb997c649d5ca02aff4d05d6c947c77a3f"} Jan 21 14:52:36 crc kubenswrapper[4902]: I0121 14:52:36.500839 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:36 crc kubenswrapper[4902]: I0121 14:52:36.521429 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" podStartSLOduration=3.521411357 podStartE2EDuration="3.521411357s" podCreationTimestamp="2026-01-21 14:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:36.517450515 +0000 UTC m=+1118.594283574" watchObservedRunningTime="2026-01-21 14:52:36.521411357 +0000 UTC m=+1118.598244386" Jan 21 14:52:37 crc kubenswrapper[4902]: I0121 14:52:37.799312 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.110601 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4czjl"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.111874 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.122659 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4czjl"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.221685 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-np7hz"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.224609 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.233878 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-np7hz"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.275932 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtj27\" (UniqueName: \"kubernetes.io/projected/f5dd3ace-42a8-4c8e-8531-0c04f145a002-kube-api-access-jtj27\") pod \"cinder-db-create-4czjl\" (UID: \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\") " pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.276011 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5dd3ace-42a8-4c8e-8531-0c04f145a002-operator-scripts\") pod \"cinder-db-create-4czjl\" (UID: \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\") " pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.323257 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-9bb1-account-create-update-f5hbr"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.324531 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.326614 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.337781 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9bb1-account-create-update-f5hbr"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.378425 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6d6225-3f7d-485d-a384-5f0e53c3055d-operator-scripts\") pod \"barbican-db-create-np7hz\" (UID: \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\") " pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.378573 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9pbc\" (UniqueName: \"kubernetes.io/projected/4c6d6225-3f7d-485d-a384-5f0e53c3055d-kube-api-access-b9pbc\") pod \"barbican-db-create-np7hz\" (UID: \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\") " pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.378629 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtj27\" (UniqueName: \"kubernetes.io/projected/f5dd3ace-42a8-4c8e-8531-0c04f145a002-kube-api-access-jtj27\") pod \"cinder-db-create-4czjl\" (UID: \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\") " pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.378807 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5dd3ace-42a8-4c8e-8531-0c04f145a002-operator-scripts\") pod \"cinder-db-create-4czjl\" (UID: \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\") " pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.379527 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5dd3ace-42a8-4c8e-8531-0c04f145a002-operator-scripts\") pod \"cinder-db-create-4czjl\" (UID: \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\") " pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.424887 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtj27\" (UniqueName: \"kubernetes.io/projected/f5dd3ace-42a8-4c8e-8531-0c04f145a002-kube-api-access-jtj27\") pod \"cinder-db-create-4czjl\" (UID: \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\") " pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.467242 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7226-account-create-update-krlk5"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.479183 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.482719 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.490525 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.491690 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9pbc\" (UniqueName: \"kubernetes.io/projected/4c6d6225-3f7d-485d-a384-5f0e53c3055d-kube-api-access-b9pbc\") pod \"barbican-db-create-np7hz\" (UID: \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\") " pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.491773 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-operator-scripts\") pod \"barbican-9bb1-account-create-update-f5hbr\" (UID: \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\") " pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.491974 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6d6225-3f7d-485d-a384-5f0e53c3055d-operator-scripts\") pod \"barbican-db-create-np7hz\" (UID: \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\") " pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.492126 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmsqg\" (UniqueName: \"kubernetes.io/projected/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-kube-api-access-hmsqg\") pod \"barbican-9bb1-account-create-update-f5hbr\" (UID: \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\") " pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.492975 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6d6225-3f7d-485d-a384-5f0e53c3055d-operator-scripts\") pod \"barbican-db-create-np7hz\" (UID: \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\") " pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.497449 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7226-account-create-update-krlk5"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.532983 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9pbc\" (UniqueName: \"kubernetes.io/projected/4c6d6225-3f7d-485d-a384-5f0e53c3055d-kube-api-access-b9pbc\") pod \"barbican-db-create-np7hz\" (UID: \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\") " pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.553935 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.612919 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dngvh\" (UniqueName: \"kubernetes.io/projected/e7eab019-1ec9-4109-93f8-2f3caa1fa508-kube-api-access-dngvh\") pod \"cinder-7226-account-create-update-krlk5\" (UID: \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\") " pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.613012 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmsqg\" (UniqueName: \"kubernetes.io/projected/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-kube-api-access-hmsqg\") pod \"barbican-9bb1-account-create-update-f5hbr\" (UID: \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\") " pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.613148 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7eab019-1ec9-4109-93f8-2f3caa1fa508-operator-scripts\") pod \"cinder-7226-account-create-update-krlk5\" (UID: \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\") " pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.613173 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-operator-scripts\") pod \"barbican-9bb1-account-create-update-f5hbr\" (UID: \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\") " pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.614130 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-operator-scripts\") pod \"barbican-9bb1-account-create-update-f5hbr\" (UID: \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\") " pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.641438 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-nxvvs"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.642910 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.646700 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8n66z"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.648034 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.657113 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.657719 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5z5b6" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.657849 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.667257 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.676034 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8n66z"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.695229 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nxvvs"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.698316 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmsqg\" (UniqueName: \"kubernetes.io/projected/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-kube-api-access-hmsqg\") pod \"barbican-9bb1-account-create-update-f5hbr\" (UID: \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\") " pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.715971 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7eab019-1ec9-4109-93f8-2f3caa1fa508-operator-scripts\") pod \"cinder-7226-account-create-update-krlk5\" (UID: \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\") " pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.716063 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-combined-ca-bundle\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.716096 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/095a6aec-1aa5-4754-818a-bbe7eedad9f2-operator-scripts\") pod \"neutron-db-create-nxvvs\" (UID: \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\") " pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.716114 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2hvc\" (UniqueName: \"kubernetes.io/projected/095a6aec-1aa5-4754-818a-bbe7eedad9f2-kube-api-access-v2hvc\") pod \"neutron-db-create-nxvvs\" (UID: \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\") " pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.716150 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dngvh\" (UniqueName: \"kubernetes.io/projected/e7eab019-1ec9-4109-93f8-2f3caa1fa508-kube-api-access-dngvh\") pod \"cinder-7226-account-create-update-krlk5\" (UID: \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\") " pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:38 crc kubenswrapper[4902]: 
I0121 14:52:38.716171 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvf4j\" (UniqueName: \"kubernetes.io/projected/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-kube-api-access-qvf4j\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.716191 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-config-data\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.716918 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7eab019-1ec9-4109-93f8-2f3caa1fa508-operator-scripts\") pod \"cinder-7226-account-create-update-krlk5\" (UID: \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\") " pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.751675 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dngvh\" (UniqueName: \"kubernetes.io/projected/e7eab019-1ec9-4109-93f8-2f3caa1fa508-kube-api-access-dngvh\") pod \"cinder-7226-account-create-update-krlk5\" (UID: \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\") " pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.798608 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-457b-account-create-update-2trwh"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.799979 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.807985 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.819073 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b401edf-e2ca-4abb-adb7-008ce32403b1-operator-scripts\") pod \"neutron-457b-account-create-update-2trwh\" (UID: \"3b401edf-e2ca-4abb-adb7-008ce32403b1\") " pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.819211 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-combined-ca-bundle\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.819252 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/095a6aec-1aa5-4754-818a-bbe7eedad9f2-operator-scripts\") pod \"neutron-db-create-nxvvs\" (UID: \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\") " pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.819274 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2hvc\" (UniqueName: \"kubernetes.io/projected/095a6aec-1aa5-4754-818a-bbe7eedad9f2-kube-api-access-v2hvc\") pod \"neutron-db-create-nxvvs\" (UID: \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\") " pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.819334 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvf4j\" (UniqueName: \"kubernetes.io/projected/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-kube-api-access-qvf4j\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.819359 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-config-data\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.819392 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnpwx\" (UniqueName: \"kubernetes.io/projected/3b401edf-e2ca-4abb-adb7-008ce32403b1-kube-api-access-xnpwx\") pod \"neutron-457b-account-create-update-2trwh\" (UID: \"3b401edf-e2ca-4abb-adb7-008ce32403b1\") " pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.820780 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/095a6aec-1aa5-4754-818a-bbe7eedad9f2-operator-scripts\") pod \"neutron-db-create-nxvvs\" (UID: \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\") " pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.824542 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-combined-ca-bundle\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.825363 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-config-data\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.838070 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-457b-account-create-update-2trwh"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.847408 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvf4j\" (UniqueName: \"kubernetes.io/projected/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-kube-api-access-qvf4j\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.853122 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2hvc\" (UniqueName: \"kubernetes.io/projected/095a6aec-1aa5-4754-818a-bbe7eedad9f2-kube-api-access-v2hvc\") pod \"neutron-db-create-nxvvs\" (UID: \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\") " pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.920520 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnpwx\" (UniqueName: \"kubernetes.io/projected/3b401edf-e2ca-4abb-adb7-008ce32403b1-kube-api-access-xnpwx\") pod \"neutron-457b-account-create-update-2trwh\" (UID: \"3b401edf-e2ca-4abb-adb7-008ce32403b1\") " pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.920632 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b401edf-e2ca-4abb-adb7-008ce32403b1-operator-scripts\") pod \"neutron-457b-account-create-update-2trwh\" (UID: \"3b401edf-e2ca-4abb-adb7-008ce32403b1\") " pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.921709 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b401edf-e2ca-4abb-adb7-008ce32403b1-operator-scripts\") pod \"neutron-457b-account-create-update-2trwh\" (UID: \"3b401edf-e2ca-4abb-adb7-008ce32403b1\") " pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.939568 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.939838 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnpwx\" (UniqueName: \"kubernetes.io/projected/3b401edf-e2ca-4abb-adb7-008ce32403b1-kube-api-access-xnpwx\") pod \"neutron-457b-account-create-update-2trwh\" (UID: \"3b401edf-e2ca-4abb-adb7-008ce32403b1\") " pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.958532 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.037289 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.049750 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.134902 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.153052 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4czjl"] Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.166977 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-np7hz"] Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.415002 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9bb1-account-create-update-f5hbr"] Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.539881 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nxvvs"] Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.551525 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4czjl" event={"ID":"f5dd3ace-42a8-4c8e-8531-0c04f145a002","Type":"ContainerStarted","Data":"3d6ff1e2aa4c6d25b1afbc1c6226ab9dd8acd5472135388cee9dad4beae1dc39"} Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.551578 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4czjl" event={"ID":"f5dd3ace-42a8-4c8e-8531-0c04f145a002","Type":"ContainerStarted","Data":"de1ce05294c31ea3e7485201a8c03eb422a890c761c80f00cfdf53c008a3097c"} Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.553183 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-np7hz" event={"ID":"4c6d6225-3f7d-485d-a384-5f0e53c3055d","Type":"ContainerStarted","Data":"af472f5b3bb9010ffaa61382ab0352d28b368f2e713ea44d92c653fb5e095055"} Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.553222 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-np7hz" event={"ID":"4c6d6225-3f7d-485d-a384-5f0e53c3055d","Type":"ContainerStarted","Data":"e51262b04bb5a162fdf6ca544ef19f5ab4091e99cb9d8ee72320234ca0e42e90"} Jan 21 14:52:39 crc kubenswrapper[4902]: W0121 14:52:39.555928 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod095a6aec_1aa5_4754_818a_bbe7eedad9f2.slice/crio-be00237b138fbb3d6818c304c7c9f421c4adcf71270cef33add7a86999a6d925 WatchSource:0}: Error finding container be00237b138fbb3d6818c304c7c9f421c4adcf71270cef33add7a86999a6d925: Status 404 returned error can't find the container with id be00237b138fbb3d6818c304c7c9f421c4adcf71270cef33add7a86999a6d925 Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.556060 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9bb1-account-create-update-f5hbr" event={"ID":"d0fa0e74-137e-4ff6-9610-37b9ebe612c9","Type":"ContainerStarted","Data":"21de5eaae6221d07dff1d905aed3b34121b811bb09237e945729567417f596af"} Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.563734 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-7226-account-create-update-krlk5"] Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.587840 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-np7hz" podStartSLOduration=1.587826001 podStartE2EDuration="1.587826001s" podCreationTimestamp="2026-01-21 14:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:39.584104716 +0000 UTC m=+1121.660937745" watchObservedRunningTime="2026-01-21 14:52:39.587826001 +0000 UTC m=+1121.664659030" Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.592858 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-4czjl" podStartSLOduration=1.592842133 podStartE2EDuration="1.592842133s" podCreationTimestamp="2026-01-21 14:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:39.569446671 +0000 UTC m=+1121.646279700" watchObservedRunningTime="2026-01-21 14:52:39.592842133 +0000 UTC m=+1121.669675162" Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.780675 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-457b-account-create-update-2trwh"] Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.791424 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8n66z"] Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.567603 4902 generic.go:334] "Generic (PLEG): container finished" podID="e7eab019-1ec9-4109-93f8-2f3caa1fa508" containerID="cff876825001ee2c7fa7f8bdbe379da8527d1a33b467f10b305adc0a8747aa98" exitCode=0 Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.567670 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7226-account-create-update-krlk5" event={"ID":"e7eab019-1ec9-4109-93f8-2f3caa1fa508","Type":"ContainerDied","Data":"cff876825001ee2c7fa7f8bdbe379da8527d1a33b467f10b305adc0a8747aa98"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.567945 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7226-account-create-update-krlk5" event={"ID":"e7eab019-1ec9-4109-93f8-2f3caa1fa508","Type":"ContainerStarted","Data":"fbbc8fb531dd195dba5a0e18a68911ad9d163e963b3fc357dfbd7a5adc3a9c2a"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.570242 4902 generic.go:334] "Generic (PLEG): container finished" podID="4c6d6225-3f7d-485d-a384-5f0e53c3055d" containerID="af472f5b3bb9010ffaa61382ab0352d28b368f2e713ea44d92c653fb5e095055" exitCode=0 Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.570309 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-np7hz" event={"ID":"4c6d6225-3f7d-485d-a384-5f0e53c3055d","Type":"ContainerDied","Data":"af472f5b3bb9010ffaa61382ab0352d28b368f2e713ea44d92c653fb5e095055"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.572955 4902 generic.go:334] "Generic (PLEG): container finished" podID="3b401edf-e2ca-4abb-adb7-008ce32403b1" containerID="689584950e8fe70d3a520e19880e648a9cfc4e1dba5d9cf1c7c92f94555adda3" exitCode=0 Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.573115 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-457b-account-create-update-2trwh" 
event={"ID":"3b401edf-e2ca-4abb-adb7-008ce32403b1","Type":"ContainerDied","Data":"689584950e8fe70d3a520e19880e648a9cfc4e1dba5d9cf1c7c92f94555adda3"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.573148 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-457b-account-create-update-2trwh" event={"ID":"3b401edf-e2ca-4abb-adb7-008ce32403b1","Type":"ContainerStarted","Data":"77d78f0cbe1513d2498b1175b95c511590a6e28042d8b44bfa705339d76861da"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.575476 4902 generic.go:334] "Generic (PLEG): container finished" podID="d0fa0e74-137e-4ff6-9610-37b9ebe612c9" containerID="b979d6e79dba97b3f526cfab4506aea68e0143adfc4356de611547f4493bec9f" exitCode=0 Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.575531 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9bb1-account-create-update-f5hbr" event={"ID":"d0fa0e74-137e-4ff6-9610-37b9ebe612c9","Type":"ContainerDied","Data":"b979d6e79dba97b3f526cfab4506aea68e0143adfc4356de611547f4493bec9f"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.578868 4902 generic.go:334] "Generic (PLEG): container finished" podID="f5dd3ace-42a8-4c8e-8531-0c04f145a002" containerID="3d6ff1e2aa4c6d25b1afbc1c6226ab9dd8acd5472135388cee9dad4beae1dc39" exitCode=0 Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.578971 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4czjl" event={"ID":"f5dd3ace-42a8-4c8e-8531-0c04f145a002","Type":"ContainerDied","Data":"3d6ff1e2aa4c6d25b1afbc1c6226ab9dd8acd5472135388cee9dad4beae1dc39"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.587923 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8n66z" event={"ID":"9bb9c4d9-a042-4a60-adca-03be4d8ec42d","Type":"ContainerStarted","Data":"587efae09cf4dc3391097e9809b54ed4301ac51dc9f5bbcbd21ecf03e068c9d6"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.592005 4902 generic.go:334] "Generic (PLEG): container finished" podID="095a6aec-1aa5-4754-818a-bbe7eedad9f2" containerID="277691b4cd995bb05532afffdba1de6a3149dc7dc1e0f0e9ce9ba32058b05cf6" exitCode=0 Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.592091 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nxvvs" event={"ID":"095a6aec-1aa5-4754-818a-bbe7eedad9f2","Type":"ContainerDied","Data":"277691b4cd995bb05532afffdba1de6a3149dc7dc1e0f0e9ce9ba32058b05cf6"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.592125 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nxvvs" event={"ID":"095a6aec-1aa5-4754-818a-bbe7eedad9f2","Type":"ContainerStarted","Data":"be00237b138fbb3d6818c304c7c9f421c4adcf71270cef33add7a86999a6d925"} Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.018656 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.099495 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtj27\" (UniqueName: \"kubernetes.io/projected/f5dd3ace-42a8-4c8e-8531-0c04f145a002-kube-api-access-jtj27\") pod \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\" (UID: \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.141960 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5dd3ace-42a8-4c8e-8531-0c04f145a002-kube-api-access-jtj27" (OuterVolumeSpecName: "kube-api-access-jtj27") pod "f5dd3ace-42a8-4c8e-8531-0c04f145a002" (UID: "f5dd3ace-42a8-4c8e-8531-0c04f145a002"). InnerVolumeSpecName "kube-api-access-jtj27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.202294 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5dd3ace-42a8-4c8e-8531-0c04f145a002-operator-scripts\") pod \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\" (UID: \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.203031 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtj27\" (UniqueName: \"kubernetes.io/projected/f5dd3ace-42a8-4c8e-8531-0c04f145a002-kube-api-access-jtj27\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.203699 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5dd3ace-42a8-4c8e-8531-0c04f145a002-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5dd3ace-42a8-4c8e-8531-0c04f145a002" (UID: "f5dd3ace-42a8-4c8e-8531-0c04f145a002"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.233198 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.235842 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.238356 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.244146 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.255882 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.304315 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5dd3ace-42a8-4c8e-8531-0c04f145a002-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.405048 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnpwx\" (UniqueName: \"kubernetes.io/projected/3b401edf-e2ca-4abb-adb7-008ce32403b1-kube-api-access-xnpwx\") pod \"3b401edf-e2ca-4abb-adb7-008ce32403b1\" (UID: \"3b401edf-e2ca-4abb-adb7-008ce32403b1\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.405107 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9pbc\" (UniqueName: \"kubernetes.io/projected/4c6d6225-3f7d-485d-a384-5f0e53c3055d-kube-api-access-b9pbc\") pod \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\" (UID: \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.405129 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dngvh\" (UniqueName: \"kubernetes.io/projected/e7eab019-1ec9-4109-93f8-2f3caa1fa508-kube-api-access-dngvh\") pod \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\" (UID: \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.405145 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2hvc\" (UniqueName: \"kubernetes.io/projected/095a6aec-1aa5-4754-818a-bbe7eedad9f2-kube-api-access-v2hvc\") pod \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\" (UID: \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.405222 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/095a6aec-1aa5-4754-818a-bbe7eedad9f2-operator-scripts\") pod \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\" (UID: \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.405849 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/095a6aec-1aa5-4754-818a-bbe7eedad9f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "095a6aec-1aa5-4754-818a-bbe7eedad9f2" (UID: "095a6aec-1aa5-4754-818a-bbe7eedad9f2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406170 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6d6225-3f7d-485d-a384-5f0e53c3055d-operator-scripts\") pod \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\" (UID: \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406209 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7eab019-1ec9-4109-93f8-2f3caa1fa508-operator-scripts\") pod \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\" (UID: \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406235 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b401edf-e2ca-4abb-adb7-008ce32403b1-operator-scripts\") pod \"3b401edf-e2ca-4abb-adb7-008ce32403b1\" (UID: \"3b401edf-e2ca-4abb-adb7-008ce32403b1\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406287 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-operator-scripts\") pod \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\" (UID: \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406364 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmsqg\" (UniqueName: \"kubernetes.io/projected/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-kube-api-access-hmsqg\") pod \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\" (UID: \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406562 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c6d6225-3f7d-485d-a384-5f0e53c3055d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c6d6225-3f7d-485d-a384-5f0e53c3055d" (UID: "4c6d6225-3f7d-485d-a384-5f0e53c3055d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406590 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7eab019-1ec9-4109-93f8-2f3caa1fa508-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7eab019-1ec9-4109-93f8-2f3caa1fa508" (UID: "e7eab019-1ec9-4109-93f8-2f3caa1fa508"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406634 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b401edf-e2ca-4abb-adb7-008ce32403b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b401edf-e2ca-4abb-adb7-008ce32403b1" (UID: "3b401edf-e2ca-4abb-adb7-008ce32403b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406798 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0fa0e74-137e-4ff6-9610-37b9ebe612c9" (UID: "d0fa0e74-137e-4ff6-9610-37b9ebe612c9"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.408078 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6d6225-3f7d-485d-a384-5f0e53c3055d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.408101 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7eab019-1ec9-4109-93f8-2f3caa1fa508-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.408115 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b401edf-e2ca-4abb-adb7-008ce32403b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.408126 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.408138 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/095a6aec-1aa5-4754-818a-bbe7eedad9f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.412236 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7eab019-1ec9-4109-93f8-2f3caa1fa508-kube-api-access-dngvh" (OuterVolumeSpecName: "kube-api-access-dngvh") pod "e7eab019-1ec9-4109-93f8-2f3caa1fa508" (UID: "e7eab019-1ec9-4109-93f8-2f3caa1fa508"). InnerVolumeSpecName "kube-api-access-dngvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.412272 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c6d6225-3f7d-485d-a384-5f0e53c3055d-kube-api-access-b9pbc" (OuterVolumeSpecName: "kube-api-access-b9pbc") pod "4c6d6225-3f7d-485d-a384-5f0e53c3055d" (UID: "4c6d6225-3f7d-485d-a384-5f0e53c3055d"). InnerVolumeSpecName "kube-api-access-b9pbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.412290 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-kube-api-access-hmsqg" (OuterVolumeSpecName: "kube-api-access-hmsqg") pod "d0fa0e74-137e-4ff6-9610-37b9ebe612c9" (UID: "d0fa0e74-137e-4ff6-9610-37b9ebe612c9"). InnerVolumeSpecName "kube-api-access-hmsqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.412305 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b401edf-e2ca-4abb-adb7-008ce32403b1-kube-api-access-xnpwx" (OuterVolumeSpecName: "kube-api-access-xnpwx") pod "3b401edf-e2ca-4abb-adb7-008ce32403b1" (UID: "3b401edf-e2ca-4abb-adb7-008ce32403b1"). InnerVolumeSpecName "kube-api-access-xnpwx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.412319 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/095a6aec-1aa5-4754-818a-bbe7eedad9f2-kube-api-access-v2hvc" (OuterVolumeSpecName: "kube-api-access-v2hvc") pod "095a6aec-1aa5-4754-818a-bbe7eedad9f2" (UID: "095a6aec-1aa5-4754-818a-bbe7eedad9f2"). InnerVolumeSpecName "kube-api-access-v2hvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.510770 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmsqg\" (UniqueName: \"kubernetes.io/projected/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-kube-api-access-hmsqg\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.511025 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnpwx\" (UniqueName: \"kubernetes.io/projected/3b401edf-e2ca-4abb-adb7-008ce32403b1-kube-api-access-xnpwx\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.511034 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9pbc\" (UniqueName: \"kubernetes.io/projected/4c6d6225-3f7d-485d-a384-5f0e53c3055d-kube-api-access-b9pbc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.511057 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dngvh\" (UniqueName: \"kubernetes.io/projected/e7eab019-1ec9-4109-93f8-2f3caa1fa508-kube-api-access-dngvh\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.511070 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2hvc\" (UniqueName: \"kubernetes.io/projected/095a6aec-1aa5-4754-818a-bbe7eedad9f2-kube-api-access-v2hvc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.611494 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4czjl" event={"ID":"f5dd3ace-42a8-4c8e-8531-0c04f145a002","Type":"ContainerDied","Data":"de1ce05294c31ea3e7485201a8c03eb422a890c761c80f00cfdf53c008a3097c"} Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.611541 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de1ce05294c31ea3e7485201a8c03eb422a890c761c80f00cfdf53c008a3097c" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.611612 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.628363 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nxvvs" event={"ID":"095a6aec-1aa5-4754-818a-bbe7eedad9f2","Type":"ContainerDied","Data":"be00237b138fbb3d6818c304c7c9f421c4adcf71270cef33add7a86999a6d925"} Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.628451 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be00237b138fbb3d6818c304c7c9f421c4adcf71270cef33add7a86999a6d925" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.628508 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.632309 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7226-account-create-update-krlk5" event={"ID":"e7eab019-1ec9-4109-93f8-2f3caa1fa508","Type":"ContainerDied","Data":"fbbc8fb531dd195dba5a0e18a68911ad9d163e963b3fc357dfbd7a5adc3a9c2a"} Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.632346 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbbc8fb531dd195dba5a0e18a68911ad9d163e963b3fc357dfbd7a5adc3a9c2a" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.632384 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.634426 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-np7hz" event={"ID":"4c6d6225-3f7d-485d-a384-5f0e53c3055d","Type":"ContainerDied","Data":"e51262b04bb5a162fdf6ca544ef19f5ab4091e99cb9d8ee72320234ca0e42e90"} Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.634450 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e51262b04bb5a162fdf6ca544ef19f5ab4091e99cb9d8ee72320234ca0e42e90" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.634467 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.639204 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-457b-account-create-update-2trwh" event={"ID":"3b401edf-e2ca-4abb-adb7-008ce32403b1","Type":"ContainerDied","Data":"77d78f0cbe1513d2498b1175b95c511590a6e28042d8b44bfa705339d76861da"} Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.639223 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.639236 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77d78f0cbe1513d2498b1175b95c511590a6e28042d8b44bfa705339d76861da" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.644525 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9bb1-account-create-update-f5hbr" event={"ID":"d0fa0e74-137e-4ff6-9610-37b9ebe612c9","Type":"ContainerDied","Data":"21de5eaae6221d07dff1d905aed3b34121b811bb09237e945729567417f596af"} Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.644582 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21de5eaae6221d07dff1d905aed3b34121b811bb09237e945729567417f596af" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.644670 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:43 crc kubenswrapper[4902]: I0121 14:52:43.655402 4902 generic.go:334] "Generic (PLEG): container finished" podID="eb5e91bc-7b75-4275-b1b6-998431981fca" containerID="dff0b2c9f0b06182d720253d8f2ef15a7b10dcf34cc35665586623d88b252d47" exitCode=0 Jan 21 14:52:43 crc kubenswrapper[4902]: I0121 14:52:43.655447 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ktqgj" event={"ID":"eb5e91bc-7b75-4275-b1b6-998431981fca","Type":"ContainerDied","Data":"dff0b2c9f0b06182d720253d8f2ef15a7b10dcf34cc35665586623d88b252d47"} Jan 21 14:52:44 crc kubenswrapper[4902]: I0121 14:52:44.098227 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:44 crc kubenswrapper[4902]: I0121 14:52:44.232477 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-xglm5"] Jan 21 14:52:44 crc kubenswrapper[4902]: I0121 14:52:44.233004 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" podUID="f26a414c-0df3-4829-ad7a-c444b795160a" containerName="dnsmasq-dns" containerID="cri-o://2835e971956ba6f6b6ef4af53fbc776463dd7dc5cf9fe6d1cb87ca296d232dda" gracePeriod=10 Jan 21 14:52:44 crc kubenswrapper[4902]: I0121 14:52:44.666955 4902 generic.go:334] "Generic (PLEG): container finished" podID="f26a414c-0df3-4829-ad7a-c444b795160a" containerID="2835e971956ba6f6b6ef4af53fbc776463dd7dc5cf9fe6d1cb87ca296d232dda" exitCode=0 Jan 21 14:52:44 crc kubenswrapper[4902]: I0121 14:52:44.667033 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" event={"ID":"f26a414c-0df3-4829-ad7a-c444b795160a","Type":"ContainerDied","Data":"2835e971956ba6f6b6ef4af53fbc776463dd7dc5cf9fe6d1cb87ca296d232dda"} Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.508529 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ktqgj" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.582416 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7zxd\" (UniqueName: \"kubernetes.io/projected/eb5e91bc-7b75-4275-b1b6-998431981fca-kube-api-access-r7zxd\") pod \"eb5e91bc-7b75-4275-b1b6-998431981fca\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.582515 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-config-data\") pod \"eb5e91bc-7b75-4275-b1b6-998431981fca\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.582545 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-combined-ca-bundle\") pod \"eb5e91bc-7b75-4275-b1b6-998431981fca\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.582589 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-db-sync-config-data\") pod \"eb5e91bc-7b75-4275-b1b6-998431981fca\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.603954 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "eb5e91bc-7b75-4275-b1b6-998431981fca" (UID: "eb5e91bc-7b75-4275-b1b6-998431981fca"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.604096 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb5e91bc-7b75-4275-b1b6-998431981fca-kube-api-access-r7zxd" (OuterVolumeSpecName: "kube-api-access-r7zxd") pod "eb5e91bc-7b75-4275-b1b6-998431981fca" (UID: "eb5e91bc-7b75-4275-b1b6-998431981fca"). InnerVolumeSpecName "kube-api-access-r7zxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.613981 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb5e91bc-7b75-4275-b1b6-998431981fca" (UID: "eb5e91bc-7b75-4275-b1b6-998431981fca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.645116 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-config-data" (OuterVolumeSpecName: "config-data") pod "eb5e91bc-7b75-4275-b1b6-998431981fca" (UID: "eb5e91bc-7b75-4275-b1b6-998431981fca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.679414 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.687711 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ktqgj" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.700186 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ktqgj" event={"ID":"eb5e91bc-7b75-4275-b1b6-998431981fca","Type":"ContainerDied","Data":"06bc6ebdb802a8f9e6cf31504f046445f838734188e18dd997d7eac178a9c70b"} Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.700262 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06bc6ebdb802a8f9e6cf31504f046445f838734188e18dd997d7eac178a9c70b" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.701782 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7zxd\" (UniqueName: \"kubernetes.io/projected/eb5e91bc-7b75-4275-b1b6-998431981fca-kube-api-access-r7zxd\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.701815 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.701828 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.701846 4902 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.802810 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfprk\" (UniqueName: \"kubernetes.io/projected/f26a414c-0df3-4829-ad7a-c444b795160a-kube-api-access-pfprk\") pod \"f26a414c-0df3-4829-ad7a-c444b795160a\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.803326 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-sb\") pod \"f26a414c-0df3-4829-ad7a-c444b795160a\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.803429 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-config\") pod \"f26a414c-0df3-4829-ad7a-c444b795160a\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.803892 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-dns-svc\") pod \"f26a414c-0df3-4829-ad7a-c444b795160a\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.803953 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-nb\") pod 
\"f26a414c-0df3-4829-ad7a-c444b795160a\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.809162 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26a414c-0df3-4829-ad7a-c444b795160a-kube-api-access-pfprk" (OuterVolumeSpecName: "kube-api-access-pfprk") pod "f26a414c-0df3-4829-ad7a-c444b795160a" (UID: "f26a414c-0df3-4829-ad7a-c444b795160a"). InnerVolumeSpecName "kube-api-access-pfprk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.845552 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f26a414c-0df3-4829-ad7a-c444b795160a" (UID: "f26a414c-0df3-4829-ad7a-c444b795160a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.846818 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-config" (OuterVolumeSpecName: "config") pod "f26a414c-0df3-4829-ad7a-c444b795160a" (UID: "f26a414c-0df3-4829-ad7a-c444b795160a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.847137 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f26a414c-0df3-4829-ad7a-c444b795160a" (UID: "f26a414c-0df3-4829-ad7a-c444b795160a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.850017 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f26a414c-0df3-4829-ad7a-c444b795160a" (UID: "f26a414c-0df3-4829-ad7a-c444b795160a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.906103 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.906157 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.906174 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfprk\" (UniqueName: \"kubernetes.io/projected/f26a414c-0df3-4829-ad7a-c444b795160a-kube-api-access-pfprk\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.906185 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.906195 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.050523 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-m98rq"] Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.050930 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26a414c-0df3-4829-ad7a-c444b795160a" containerName="init" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.050947 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26a414c-0df3-4829-ad7a-c444b795160a" containerName="init" Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.050959 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5dd3ace-42a8-4c8e-8531-0c04f145a002" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.050965 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5dd3ace-42a8-4c8e-8531-0c04f145a002" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.050980 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26a414c-0df3-4829-ad7a-c444b795160a" containerName="dnsmasq-dns" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.050986 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26a414c-0df3-4829-ad7a-c444b795160a" containerName="dnsmasq-dns" Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.050997 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0fa0e74-137e-4ff6-9610-37b9ebe612c9" containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051003 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fa0e74-137e-4ff6-9610-37b9ebe612c9" containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.051012 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b401edf-e2ca-4abb-adb7-008ce32403b1" containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051017 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b401edf-e2ca-4abb-adb7-008ce32403b1" 
containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.051030 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb5e91bc-7b75-4275-b1b6-998431981fca" containerName="glance-db-sync" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051035 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb5e91bc-7b75-4275-b1b6-998431981fca" containerName="glance-db-sync" Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.051058 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095a6aec-1aa5-4754-818a-bbe7eedad9f2" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051064 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="095a6aec-1aa5-4754-818a-bbe7eedad9f2" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.051080 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6d6225-3f7d-485d-a384-5f0e53c3055d" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051086 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6d6225-3f7d-485d-a384-5f0e53c3055d" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.051092 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7eab019-1ec9-4109-93f8-2f3caa1fa508" containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051098 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7eab019-1ec9-4109-93f8-2f3caa1fa508" containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051296 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="095a6aec-1aa5-4754-818a-bbe7eedad9f2" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051322 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b401edf-e2ca-4abb-adb7-008ce32403b1" containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051335 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5dd3ace-42a8-4c8e-8531-0c04f145a002" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051348 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26a414c-0df3-4829-ad7a-c444b795160a" containerName="dnsmasq-dns" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051357 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6d6225-3f7d-485d-a384-5f0e53c3055d" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051366 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7eab019-1ec9-4109-93f8-2f3caa1fa508" containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051381 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb5e91bc-7b75-4275-b1b6-998431981fca" containerName="glance-db-sync" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051391 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fa0e74-137e-4ff6-9610-37b9ebe612c9" containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.052585 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.073398 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-m98rq"] Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.111917 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.112004 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-config\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.112030 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.112062 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.112079 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t7cl\" (UniqueName: \"kubernetes.io/projected/55109ced-875d-425c-bfca-9df867fdc7c8-kube-api-access-9t7cl\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.112102 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.213584 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.213673 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-config\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.213693 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.213708 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.213726 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t7cl\" (UniqueName: \"kubernetes.io/projected/55109ced-875d-425c-bfca-9df867fdc7c8-kube-api-access-9t7cl\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.213746 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.214509 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.214618 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.214847 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.214872 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.214901 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-config\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.233157 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t7cl\" (UniqueName: 
\"kubernetes.io/projected/55109ced-875d-425c-bfca-9df867fdc7c8-kube-api-access-9t7cl\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.370102 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.707990 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8n66z" event={"ID":"9bb9c4d9-a042-4a60-adca-03be4d8ec42d","Type":"ContainerStarted","Data":"d68914c4c8e15dba0295d1fd9bb40d5fc60aa1162bc79ce24523d135a247b33e"} Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.711481 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" event={"ID":"f26a414c-0df3-4829-ad7a-c444b795160a","Type":"ContainerDied","Data":"f2e59dbbb8c6adb99cbeb35911a1b6de41741dd0dd7508b3dc32a7f75a4ed19c"} Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.711530 4902 scope.go:117] "RemoveContainer" containerID="2835e971956ba6f6b6ef4af53fbc776463dd7dc5cf9fe6d1cb87ca296d232dda" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.711547 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.745632 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-8n66z" podStartSLOduration=3.075403622 podStartE2EDuration="8.745614376s" podCreationTimestamp="2026-01-21 14:52:38 +0000 UTC" firstStartedPulling="2026-01-21 14:52:39.844922867 +0000 UTC m=+1121.921755896" lastFinishedPulling="2026-01-21 14:52:45.515133621 +0000 UTC m=+1127.591966650" observedRunningTime="2026-01-21 14:52:46.735294674 +0000 UTC m=+1128.812127703" watchObservedRunningTime="2026-01-21 14:52:46.745614376 +0000 UTC m=+1128.822447405" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.756187 4902 scope.go:117] "RemoveContainer" containerID="e19fecd53265fa377cce915a6f9d5418debd0cc0619facc38c21547ed0d4b095" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.759100 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-xglm5"] Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.779078 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-xglm5"] Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.880454 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-m98rq"] Jan 21 14:52:47 crc kubenswrapper[4902]: I0121 14:52:47.719730 4902 generic.go:334] "Generic (PLEG): container finished" podID="55109ced-875d-425c-bfca-9df867fdc7c8" containerID="be041a1eb36c6c6ae62d40b43bc1855f878c0bd015cf7aed44fdbbf69065c16c" exitCode=0 Jan 21 14:52:47 crc kubenswrapper[4902]: I0121 14:52:47.720002 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" event={"ID":"55109ced-875d-425c-bfca-9df867fdc7c8","Type":"ContainerDied","Data":"be041a1eb36c6c6ae62d40b43bc1855f878c0bd015cf7aed44fdbbf69065c16c"} Jan 21 14:52:47 crc kubenswrapper[4902]: I0121 14:52:47.720033 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" 
event={"ID":"55109ced-875d-425c-bfca-9df867fdc7c8","Type":"ContainerStarted","Data":"54d97cf6f5d6e6f549a44efb5396488f668a2044b5853012678a53f9be6c8a9c"} Jan 21 14:52:48 crc kubenswrapper[4902]: I0121 14:52:48.307175 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26a414c-0df3-4829-ad7a-c444b795160a" path="/var/lib/kubelet/pods/f26a414c-0df3-4829-ad7a-c444b795160a/volumes" Jan 21 14:52:48 crc kubenswrapper[4902]: I0121 14:52:48.732884 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" event={"ID":"55109ced-875d-425c-bfca-9df867fdc7c8","Type":"ContainerStarted","Data":"ee373bfcc95337faf0d0a8d1d17928705fb09f163f6c59a2da58ed885e64a255"} Jan 21 14:52:48 crc kubenswrapper[4902]: I0121 14:52:48.733056 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:48 crc kubenswrapper[4902]: I0121 14:52:48.758903 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" podStartSLOduration=3.758882874 podStartE2EDuration="3.758882874s" podCreationTimestamp="2026-01-21 14:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:48.755626211 +0000 UTC m=+1130.832459250" watchObservedRunningTime="2026-01-21 14:52:48.758882874 +0000 UTC m=+1130.835715903" Jan 21 14:52:49 crc kubenswrapper[4902]: I0121 14:52:49.742405 4902 generic.go:334] "Generic (PLEG): container finished" podID="9bb9c4d9-a042-4a60-adca-03be4d8ec42d" containerID="d68914c4c8e15dba0295d1fd9bb40d5fc60aa1162bc79ce24523d135a247b33e" exitCode=0 Jan 21 14:52:49 crc kubenswrapper[4902]: I0121 14:52:49.742456 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8n66z" event={"ID":"9bb9c4d9-a042-4a60-adca-03be4d8ec42d","Type":"ContainerDied","Data":"d68914c4c8e15dba0295d1fd9bb40d5fc60aa1162bc79ce24523d135a247b33e"} Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.116130 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.204516 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-config-data\") pod \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.204952 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-combined-ca-bundle\") pod \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.205070 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvf4j\" (UniqueName: \"kubernetes.io/projected/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-kube-api-access-qvf4j\") pod \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.209381 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-kube-api-access-qvf4j" (OuterVolumeSpecName: "kube-api-access-qvf4j") pod "9bb9c4d9-a042-4a60-adca-03be4d8ec42d" (UID: "9bb9c4d9-a042-4a60-adca-03be4d8ec42d"). InnerVolumeSpecName "kube-api-access-qvf4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.226199 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bb9c4d9-a042-4a60-adca-03be4d8ec42d" (UID: "9bb9c4d9-a042-4a60-adca-03be4d8ec42d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.250133 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-config-data" (OuterVolumeSpecName: "config-data") pod "9bb9c4d9-a042-4a60-adca-03be4d8ec42d" (UID: "9bb9c4d9-a042-4a60-adca-03be4d8ec42d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.306521 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvf4j\" (UniqueName: \"kubernetes.io/projected/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-kube-api-access-qvf4j\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.306561 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.306574 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.757663 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8n66z" event={"ID":"9bb9c4d9-a042-4a60-adca-03be4d8ec42d","Type":"ContainerDied","Data":"587efae09cf4dc3391097e9809b54ed4301ac51dc9f5bbcbd21ecf03e068c9d6"} Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.757710 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="587efae09cf4dc3391097e9809b54ed4301ac51dc9f5bbcbd21ecf03e068c9d6" Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.757710 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.108792 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-m98rq"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.109008 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" podUID="55109ced-875d-425c-bfca-9df867fdc7c8" containerName="dnsmasq-dns" containerID="cri-o://ee373bfcc95337faf0d0a8d1d17928705fb09f163f6c59a2da58ed885e64a255" gracePeriod=10 Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.118309 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.161806 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dk26m"] Jan 21 14:52:52 crc kubenswrapper[4902]: E0121 14:52:52.162185 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bb9c4d9-a042-4a60-adca-03be4d8ec42d" containerName="keystone-db-sync" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.162201 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb9c4d9-a042-4a60-adca-03be4d8ec42d" containerName="keystone-db-sync" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.162363 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bb9c4d9-a042-4a60-adca-03be4d8ec42d" containerName="keystone-db-sync" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.162920 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.164680 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5z5b6" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.164883 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.164989 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.165164 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.168305 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.185596 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-credential-keys\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.185650 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dblzl\" (UniqueName: \"kubernetes.io/projected/e2be2f88-2ef5-4773-a31c-a8acd6e27608-kube-api-access-dblzl\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.185701 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-fernet-keys\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.185734 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-scripts\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.185749 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-combined-ca-bundle\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.185766 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-config-data\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.189115 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-glhzf"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.190963 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.215480 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-glhzf"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.229484 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dk26m"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.300833 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-fernet-keys\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.300892 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-config\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.300921 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.300942 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.300964 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.300989 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-scripts\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.301009 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-combined-ca-bundle\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.301030 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-config-data\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: 
I0121 14:52:52.301099 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-credential-keys\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.301145 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dblzl\" (UniqueName: \"kubernetes.io/projected/e2be2f88-2ef5-4773-a31c-a8acd6e27608-kube-api-access-dblzl\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.301190 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.301214 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrt5g\" (UniqueName: \"kubernetes.io/projected/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-kube-api-access-hrt5g\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.324940 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-scripts\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.342817 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-combined-ca-bundle\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.349661 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-config-data\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.374384 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-fernet-keys\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.374742 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-credential-keys\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.386238 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dblzl\" 
(UniqueName: \"kubernetes.io/projected/e2be2f88-2ef5-4773-a31c-a8acd6e27608-kube-api-access-dblzl\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.411955 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.412208 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrt5g\" (UniqueName: \"kubernetes.io/projected/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-kube-api-access-hrt5g\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.412318 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-config\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.412385 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.412446 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.412508 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.414305 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.414908 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.415261 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.415533 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.416365 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-config\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.417792 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-zlh54"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.419022 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.427162 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-twg7k"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.428194 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.438774 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zlh54"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.444923 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-twg7k"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.446821 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.447009 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.447203 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wh7dk" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.447364 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.459488 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.477070 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.484424 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrt5g\" (UniqueName: \"kubernetes.io/projected/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-kube-api-access-hrt5g\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.507751 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.507976 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d52d6" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.508139 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513335 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-scripts\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513407 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513433 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-run-httpd\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513457 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513484 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-combined-ca-bundle\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513514 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-config\") pod \"neutron-db-sync-zlh54\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513537 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-config-data\") pod \"cinder-db-sync-twg7k\" (UID: 
\"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513569 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-db-sync-config-data\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513596 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz5m2\" (UniqueName: \"kubernetes.io/projected/15ef0c45-4c21-4824-850e-545f66a2c20a-kube-api-access-bz5m2\") pod \"neutron-db-sync-zlh54\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513643 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4xph\" (UniqueName: \"kubernetes.io/projected/137b1040-d368-4b6d-a4db-ba7c626f666f-kube-api-access-x4xph\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513682 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-log-httpd\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513715 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-combined-ca-bundle\") pod \"neutron-db-sync-zlh54\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513740 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-config-data\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513769 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/137b1040-d368-4b6d-a4db-ba7c626f666f-etc-machine-id\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513791 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-scripts\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513813 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg6k2\" (UniqueName: \"kubernetes.io/projected/d8d84757-ad27-4177-be9f-d7d351e771e2-kube-api-access-bg6k2\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " 
pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.543216 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.582585 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.588741 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.614210 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615016 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615058 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-run-httpd\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615091 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615112 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-combined-ca-bundle\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615129 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-config\") pod \"neutron-db-sync-zlh54\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615143 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-config-data\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615172 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-db-sync-config-data\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615194 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz5m2\" (UniqueName: \"kubernetes.io/projected/15ef0c45-4c21-4824-850e-545f66a2c20a-kube-api-access-bz5m2\") pod \"neutron-db-sync-zlh54\" (UID: 
\"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4xph\" (UniqueName: \"kubernetes.io/projected/137b1040-d368-4b6d-a4db-ba7c626f666f-kube-api-access-x4xph\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615257 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-log-httpd\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615273 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-combined-ca-bundle\") pod \"neutron-db-sync-zlh54\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615289 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-config-data\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615310 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/137b1040-d368-4b6d-a4db-ba7c626f666f-etc-machine-id\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615325 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-scripts\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615341 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg6k2\" (UniqueName: \"kubernetes.io/projected/d8d84757-ad27-4177-be9f-d7d351e771e2-kube-api-access-bg6k2\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615376 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-scripts\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.627921 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-scripts\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.628065 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/137b1040-d368-4b6d-a4db-ba7c626f666f-etc-machine-id\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.632317 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-log-httpd\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.632593 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-run-httpd\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.645862 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.656904 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-combined-ca-bundle\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.657418 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-config-data\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.657852 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-scripts\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.662476 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.662820 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-combined-ca-bundle\") pod \"neutron-db-sync-zlh54\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.662955 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-config\") pod \"neutron-db-sync-zlh54\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.663132 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-db-sync-config-data\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.665359 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-config-data\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.698291 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz5m2\" (UniqueName: \"kubernetes.io/projected/15ef0c45-4c21-4824-850e-545f66a2c20a-kube-api-access-bz5m2\") pod \"neutron-db-sync-zlh54\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.706740 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4xph\" (UniqueName: \"kubernetes.io/projected/137b1040-d368-4b6d-a4db-ba7c626f666f-kube-api-access-x4xph\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.715625 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg6k2\" (UniqueName: \"kubernetes.io/projected/d8d84757-ad27-4177-be9f-d7d351e771e2-kube-api-access-bg6k2\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.722248 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-b64dh"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.723906 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.736890 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.749342 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-26vvq" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.753311 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.773057 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.787562 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.800111 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-b64dh"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.823990 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-scripts\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.824085 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-config-data\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.824106 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83490157-abed-443f-8843-945bb43715af-logs\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.824136 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlgnt\" (UniqueName: \"kubernetes.io/projected/83490157-abed-443f-8843-945bb43715af-kube-api-access-xlgnt\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.824176 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-combined-ca-bundle\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.845167 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4ds4z"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.846291 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.861894 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.863133 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-lxg2q" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.863289 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.894921 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4ds4z"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.900007 4902 generic.go:334] "Generic (PLEG): container finished" podID="55109ced-875d-425c-bfca-9df867fdc7c8" containerID="ee373bfcc95337faf0d0a8d1d17928705fb09f163f6c59a2da58ed885e64a255" exitCode=0 Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.900072 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" event={"ID":"55109ced-875d-425c-bfca-9df867fdc7c8","Type":"ContainerDied","Data":"ee373bfcc95337faf0d0a8d1d17928705fb09f163f6c59a2da58ed885e64a255"} Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.925906 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-combined-ca-bundle\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.925969 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-scripts\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.925992 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tjb8\" (UniqueName: \"kubernetes.io/projected/df9277be-e557-4d2e-b799-8fc6def975b9-kube-api-access-6tjb8\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.926043 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-db-sync-config-data\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.926078 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-config-data\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.926094 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83490157-abed-443f-8843-945bb43715af-logs\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.926125 4902 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-xlgnt\" (UniqueName: \"kubernetes.io/projected/83490157-abed-443f-8843-945bb43715af-kube-api-access-xlgnt\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.926143 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-combined-ca-bundle\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.934640 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-combined-ca-bundle\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.937570 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-scripts\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.941872 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83490157-abed-443f-8843-945bb43715af-logs\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.956832 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-config-data\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.966226 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-glhzf"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.991190 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlgnt\" (UniqueName: \"kubernetes.io/projected/83490157-abed-443f-8843-945bb43715af-kube-api-access-xlgnt\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.011966 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-xstzn"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.013246 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.026953 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.027543 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-combined-ca-bundle\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.027622 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6hzz\" (UniqueName: \"kubernetes.io/projected/fdcec88e-b290-47a2-a111-f353528b337e-kube-api-access-d6hzz\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.027688 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.027769 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tjb8\" (UniqueName: \"kubernetes.io/projected/df9277be-e557-4d2e-b799-8fc6def975b9-kube-api-access-6tjb8\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.027856 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-config\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.027930 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.027999 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-db-sync-config-data\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.028083 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-swift-storage-0\") pod 
\"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.032462 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-combined-ca-bundle\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.036618 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-db-sync-config-data\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.058604 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tjb8\" (UniqueName: \"kubernetes.io/projected/df9277be-e557-4d2e-b799-8fc6def975b9-kube-api-access-6tjb8\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.067241 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-xstzn"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.087436 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.135034 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-config\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.135087 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.135117 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.135166 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.135207 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6hzz\" (UniqueName: \"kubernetes.io/projected/fdcec88e-b290-47a2-a111-f353528b337e-kube-api-access-d6hzz\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 
14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.135226 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.136157 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.136250 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.136275 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.136709 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.137110 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-config\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.155213 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6hzz\" (UniqueName: \"kubernetes.io/projected/fdcec88e-b290-47a2-a111-f353528b337e-kube-api-access-d6hzz\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.171977 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.201520 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.240252 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-nb\") pod \"55109ced-875d-425c-bfca-9df867fdc7c8\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.240350 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-swift-storage-0\") pod \"55109ced-875d-425c-bfca-9df867fdc7c8\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.240502 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t7cl\" (UniqueName: \"kubernetes.io/projected/55109ced-875d-425c-bfca-9df867fdc7c8-kube-api-access-9t7cl\") pod \"55109ced-875d-425c-bfca-9df867fdc7c8\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.240544 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-sb\") pod \"55109ced-875d-425c-bfca-9df867fdc7c8\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.240602 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-svc\") pod \"55109ced-875d-425c-bfca-9df867fdc7c8\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.240635 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-config\") pod \"55109ced-875d-425c-bfca-9df867fdc7c8\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.251020 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55109ced-875d-425c-bfca-9df867fdc7c8-kube-api-access-9t7cl" (OuterVolumeSpecName: "kube-api-access-9t7cl") pod "55109ced-875d-425c-bfca-9df867fdc7c8" (UID: "55109ced-875d-425c-bfca-9df867fdc7c8"). InnerVolumeSpecName "kube-api-access-9t7cl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.309318 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:52:53 crc kubenswrapper[4902]: E0121 14:52:53.309808 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55109ced-875d-425c-bfca-9df867fdc7c8" containerName="dnsmasq-dns" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.309824 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="55109ced-875d-425c-bfca-9df867fdc7c8" containerName="dnsmasq-dns" Jan 21 14:52:53 crc kubenswrapper[4902]: E0121 14:52:53.309837 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55109ced-875d-425c-bfca-9df867fdc7c8" containerName="init" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.309845 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="55109ced-875d-425c-bfca-9df867fdc7c8" containerName="init" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.310113 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="55109ced-875d-425c-bfca-9df867fdc7c8" containerName="dnsmasq-dns" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.311144 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.335207 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.335539 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-n2trw" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.335729 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.335886 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.375991 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.377446 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t7cl\" (UniqueName: \"kubernetes.io/projected/55109ced-875d-425c-bfca-9df867fdc7c8-kube-api-access-9t7cl\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.382377 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.406092 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "55109ced-875d-425c-bfca-9df867fdc7c8" (UID: "55109ced-875d-425c-bfca-9df867fdc7c8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.441613 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-glhzf"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.460832 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "55109ced-875d-425c-bfca-9df867fdc7c8" (UID: "55109ced-875d-425c-bfca-9df867fdc7c8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.472755 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dk26m"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.481007 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-scripts\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.481210 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v429g\" (UniqueName: \"kubernetes.io/projected/55daf4a6-0e2d-4832-8740-87f628a6e2cc-kube-api-access-v429g\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.481405 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.481563 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.481697 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-config-data\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.481877 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.481973 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.482097 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-logs\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.484468 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.484497 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.506921 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.514967 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-config" (OuterVolumeSpecName: "config") pod "55109ced-875d-425c-bfca-9df867fdc7c8" (UID: "55109ced-875d-425c-bfca-9df867fdc7c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.518547 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "55109ced-875d-425c-bfca-9df867fdc7c8" (UID: "55109ced-875d-425c-bfca-9df867fdc7c8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.520972 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.530980 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.531025 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.578965 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.586810 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "55109ced-875d-425c-bfca-9df867fdc7c8" (UID: "55109ced-875d-425c-bfca-9df867fdc7c8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603334 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb6s8\" (UniqueName: \"kubernetes.io/projected/7cbc3227-2b2b-489c-bc35-2266eae99935-kube-api-access-sb6s8\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603395 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603429 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603475 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-config-data\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603516 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-logs\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603584 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603641 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603669 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-logs\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603698 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " 
pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603781 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603830 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-scripts\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603859 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429g\" (UniqueName: \"kubernetes.io/projected/55daf4a6-0e2d-4832-8740-87f628a6e2cc-kube-api-access-v429g\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603936 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603971 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603999 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.604028 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.604139 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.604155 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.604168 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.609668 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.610405 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.612495 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-logs\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.615020 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.615200 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.615564 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-config-data\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.630439 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-scripts\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.635310 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v429g\" (UniqueName: \"kubernetes.io/projected/55daf4a6-0e2d-4832-8740-87f628a6e2cc-kube-api-access-v429g\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.646740 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc 
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708265 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708339 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708423 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708449 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708478 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708512 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb6s8\" (UniqueName: \"kubernetes.io/projected/7cbc3227-2b2b-489c-bc35-2266eae99935-kube-api-access-sb6s8\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708555 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708608 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-logs\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708868 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.709274 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-logs\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.709664 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.714665 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.719351 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-twg7k"]
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.726896 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.731761 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.732284 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.737350 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb6s8\" (UniqueName: \"kubernetes.io/projected/7cbc3227-2b2b-489c-bc35-2266eae99935-kube-api-access-sb6s8\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.741408 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.866858 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.875508 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.880443 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zlh54"]
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.895149 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-b64dh"]
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.899582 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.910665 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" event={"ID":"55109ced-875d-425c-bfca-9df867fdc7c8","Type":"ContainerDied","Data":"54d97cf6f5d6e6f549a44efb5396488f668a2044b5853012678a53f9be6c8a9c"}
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.910696 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.910715 4902 scope.go:117] "RemoveContainer" containerID="ee373bfcc95337faf0d0a8d1d17928705fb09f163f6c59a2da58ed885e64a255"
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.912104 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" event={"ID":"cb4097d7-2ce0-4a7c-b524-82d34c3d368c","Type":"ContainerStarted","Data":"97e640bda0bdcefdf4097a70f64afdf78317cfb56209fc082cd41b1db92af0f8"}
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.946128 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-m98rq"]
Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.953658 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-m98rq"]
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.050294 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4ds4z"]
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.150625 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-xstzn"]
Jan 21 14:52:54 crc kubenswrapper[4902]: W0121 14:52:54.176410 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2be2f88_2ef5_4773_a31c_a8acd6e27608.slice/crio-7544f3d00e92bd9decbbbaa4f539cd4161aa3325e13242445399b9b398495d75 WatchSource:0}: Error finding container 7544f3d00e92bd9decbbbaa4f539cd4161aa3325e13242445399b9b398495d75: Status 404 returned error can't find the container with id 7544f3d00e92bd9decbbbaa4f539cd4161aa3325e13242445399b9b398495d75
Jan 21 14:52:54 crc kubenswrapper[4902]: W0121 14:52:54.181825 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15ef0c45_4c21_4824_850e_545f66a2c20a.slice/crio-e397414d8ec5ae40c73abfed568886ab434c67c248ca086a30736dc5c091823f WatchSource:0}: Error finding container e397414d8ec5ae40c73abfed568886ab434c67c248ca086a30736dc5c091823f: Status 404 returned error can't find the container with id e397414d8ec5ae40c73abfed568886ab434c67c248ca086a30736dc5c091823f
Jan 21 14:52:54 crc kubenswrapper[4902]: W0121 14:52:54.207078 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdcec88e_b290_47a2_a111_f353528b337e.slice/crio-7ebaaeb2e75c14c9e558716e3947916d738a9e8482d276a5a56a2360272de153 WatchSource:0}: Error finding container 7ebaaeb2e75c14c9e558716e3947916d738a9e8482d276a5a56a2360272de153: Status 404 returned error can't find the container with id 7ebaaeb2e75c14c9e558716e3947916d738a9e8482d276a5a56a2360272de153
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.232277 4902 scope.go:117] "RemoveContainer" containerID="be041a1eb36c6c6ae62d40b43bc1855f878c0bd015cf7aed44fdbbf69065c16c"
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.331282 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55109ced-875d-425c-bfca-9df867fdc7c8" path="/var/lib/kubelet/pods/55109ced-875d-425c-bfca-9df867fdc7c8/volumes"
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.560953 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.636933 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.736979 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.872974 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.928884 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerStarted","Data":"93b4216849acc7e83ad93b11dfedacb592b887f2e39ca3b5b2c28470072e2c3e"}
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.933385 4902 generic.go:334] "Generic (PLEG): container finished" podID="cb4097d7-2ce0-4a7c-b524-82d34c3d368c" containerID="3ff2eb1b3a6a4a80f1d99b80a1c9dfd1a67c9101a6c6bbf9236def171548312e" exitCode=0
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.933572 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" event={"ID":"cb4097d7-2ce0-4a7c-b524-82d34c3d368c","Type":"ContainerDied","Data":"3ff2eb1b3a6a4a80f1d99b80a1c9dfd1a67c9101a6c6bbf9236def171548312e"}
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.943981 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zlh54" event={"ID":"15ef0c45-4c21-4824-850e-545f66a2c20a","Type":"ContainerStarted","Data":"c05aa038a30ca68cb9b9875b1713755a7a748b30cae2fd412e457a921170733c"}
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.944050 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zlh54" event={"ID":"15ef0c45-4c21-4824-850e-545f66a2c20a","Type":"ContainerStarted","Data":"e397414d8ec5ae40c73abfed568886ab434c67c248ca086a30736dc5c091823f"}
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.949794 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-b64dh" event={"ID":"83490157-abed-443f-8843-945bb43715af","Type":"ContainerStarted","Data":"9f26212e4bdc5bda5416b6956048e081a79eb4fe056e9e364faed24f7ac4f14f"}
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.959117 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.966664 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dk26m" event={"ID":"e2be2f88-2ef5-4773-a31c-a8acd6e27608","Type":"ContainerStarted","Data":"276b271b02ab000b334b001c5253fa10542fc6c000e67438f4ac84d47645e83c"}
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.966705 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dk26m" event={"ID":"e2be2f88-2ef5-4773-a31c-a8acd6e27608","Type":"ContainerStarted","Data":"7544f3d00e92bd9decbbbaa4f539cd4161aa3325e13242445399b9b398495d75"}
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.982706 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55daf4a6-0e2d-4832-8740-87f628a6e2cc","Type":"ContainerStarted","Data":"7b40ba155df5f9ea3c66a7bdb479c5b6c0f2b6eda7d8e8f89404b65e212bd221"}
Jan 21 14:52:54 crc kubenswrapper[4902]: W0121 14:52:54.987561 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cbc3227_2b2b_489c_bc35_2266eae99935.slice/crio-f3dd33815c8a4853e7c5aeed751d1592a9c789599335b4e59224938b34e8c9a4 WatchSource:0}: Error finding container f3dd33815c8a4853e7c5aeed751d1592a9c789599335b4e59224938b34e8c9a4: Status 404 returned error can't find the container with id f3dd33815c8a4853e7c5aeed751d1592a9c789599335b4e59224938b34e8c9a4
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.987868 4902 generic.go:334] "Generic (PLEG): container finished" podID="fdcec88e-b290-47a2-a111-f353528b337e" containerID="e09162a3ec37680929590914b38193023c428285227f1464b2740e369fca6b12" exitCode=0
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.987912 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" event={"ID":"fdcec88e-b290-47a2-a111-f353528b337e","Type":"ContainerDied","Data":"e09162a3ec37680929590914b38193023c428285227f1464b2740e369fca6b12"}
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.987931 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" event={"ID":"fdcec88e-b290-47a2-a111-f353528b337e","Type":"ContainerStarted","Data":"7ebaaeb2e75c14c9e558716e3947916d738a9e8482d276a5a56a2360272de153"}
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.991432 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-zlh54" podStartSLOduration=2.991409842 podStartE2EDuration="2.991409842s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:54.982842289 +0000 UTC m=+1137.059675318" watchObservedRunningTime="2026-01-21 14:52:54.991409842 +0000 UTC m=+1137.068242871"
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.992711 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4ds4z" event={"ID":"df9277be-e557-4d2e-b799-8fc6def975b9","Type":"ContainerStarted","Data":"1240a2082e984db724460dca85452b351506f660a1b70f26c765e2a219ef66f2"}
Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.997837 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-twg7k" event={"ID":"137b1040-d368-4b6d-a4db-ba7c626f666f","Type":"ContainerStarted","Data":"83424afb06205a5855d6b3c92c92324b00e4ab6828b9f7a1bf1115dc87d2cda6"}
startup duration" pod="openstack/keystone-bootstrap-dk26m" podStartSLOduration=3.005206272 podStartE2EDuration="3.005206272s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:55.003274838 +0000 UTC m=+1137.080107867" watchObservedRunningTime="2026-01-21 14:52:55.005206272 +0000 UTC m=+1137.082039301" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.416624 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.567564 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-swift-storage-0\") pod \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.567758 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrt5g\" (UniqueName: \"kubernetes.io/projected/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-kube-api-access-hrt5g\") pod \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.567777 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-config\") pod \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.567799 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-nb\") pod \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.567846 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-svc\") pod \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.567893 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-sb\") pod \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.585371 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-kube-api-access-hrt5g" (OuterVolumeSpecName: "kube-api-access-hrt5g") pod "cb4097d7-2ce0-4a7c-b524-82d34c3d368c" (UID: "cb4097d7-2ce0-4a7c-b524-82d34c3d368c"). InnerVolumeSpecName "kube-api-access-hrt5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.597919 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cb4097d7-2ce0-4a7c-b524-82d34c3d368c" (UID: "cb4097d7-2ce0-4a7c-b524-82d34c3d368c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.600507 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cb4097d7-2ce0-4a7c-b524-82d34c3d368c" (UID: "cb4097d7-2ce0-4a7c-b524-82d34c3d368c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.624822 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cb4097d7-2ce0-4a7c-b524-82d34c3d368c" (UID: "cb4097d7-2ce0-4a7c-b524-82d34c3d368c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.641716 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-config" (OuterVolumeSpecName: "config") pod "cb4097d7-2ce0-4a7c-b524-82d34c3d368c" (UID: "cb4097d7-2ce0-4a7c-b524-82d34c3d368c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.652408 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb4097d7-2ce0-4a7c-b524-82d34c3d368c" (UID: "cb4097d7-2ce0-4a7c-b524-82d34c3d368c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.671022 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrt5g\" (UniqueName: \"kubernetes.io/projected/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-kube-api-access-hrt5g\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.671079 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.671090 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.671097 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.671106 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.671115 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.061948 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" event={"ID":"cb4097d7-2ce0-4a7c-b524-82d34c3d368c","Type":"ContainerDied","Data":"97e640bda0bdcefdf4097a70f64afdf78317cfb56209fc082cd41b1db92af0f8"} Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.062011 4902 scope.go:117] "RemoveContainer" containerID="3ff2eb1b3a6a4a80f1d99b80a1c9dfd1a67c9101a6c6bbf9236def171548312e" Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.062193 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.069649 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7cbc3227-2b2b-489c-bc35-2266eae99935","Type":"ContainerStarted","Data":"f3dd33815c8a4853e7c5aeed751d1592a9c789599335b4e59224938b34e8c9a4"} Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.077172 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" event={"ID":"fdcec88e-b290-47a2-a111-f353528b337e","Type":"ContainerStarted","Data":"7692fd62f5f8d970ca1dd253fc5c7512cbe9da4bdb84caf7d56a5669f3d8f303"} Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.077231 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.103025 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" podStartSLOduration=4.103002152 podStartE2EDuration="4.103002152s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:56.101645773 +0000 UTC m=+1138.178478802" watchObservedRunningTime="2026-01-21 14:52:56.103002152 +0000 UTC m=+1138.179835181" Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.154870 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-glhzf"] Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.157837 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-glhzf"] Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.305862 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb4097d7-2ce0-4a7c-b524-82d34c3d368c" path="/var/lib/kubelet/pods/cb4097d7-2ce0-4a7c-b524-82d34c3d368c/volumes" Jan 21 14:52:57 crc kubenswrapper[4902]: I0121 14:52:57.090541 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7cbc3227-2b2b-489c-bc35-2266eae99935","Type":"ContainerStarted","Data":"4cc1203e814fc62d40f33869f21d58884c995a732338ba7c6403b666fa8b712d"} Jan 21 14:52:57 crc kubenswrapper[4902]: I0121 14:52:57.098414 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55daf4a6-0e2d-4832-8740-87f628a6e2cc","Type":"ContainerStarted","Data":"0217c431d44df2f20d49bb80b38d3634c60d8d298432655fbcd6784934a969c5"} Jan 21 14:52:58 crc kubenswrapper[4902]: I0121 14:52:58.110416 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7cbc3227-2b2b-489c-bc35-2266eae99935","Type":"ContainerStarted","Data":"8d87c9d3ad1eb4e5b4f658d4e0a489d56cdaee9fc570202fc41afc9916f4ea6a"} Jan 21 14:52:58 crc kubenswrapper[4902]: I0121 14:52:58.110517 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerName="glance-log" containerID="cri-o://4cc1203e814fc62d40f33869f21d58884c995a732338ba7c6403b666fa8b712d" gracePeriod=30 Jan 21 14:52:58 crc kubenswrapper[4902]: I0121 14:52:58.110580 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerName="glance-httpd" containerID="cri-o://8d87c9d3ad1eb4e5b4f658d4e0a489d56cdaee9fc570202fc41afc9916f4ea6a" gracePeriod=30 Jan 21 14:52:58 crc kubenswrapper[4902]: I0121 14:52:58.118401 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55daf4a6-0e2d-4832-8740-87f628a6e2cc","Type":"ContainerStarted","Data":"b8ee32986cfb942a0b8084ffe5d7119b0f8740770e39cc90369ef4116159b309"} Jan 21 14:52:58 crc kubenswrapper[4902]: I0121 14:52:58.118594 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerName="glance-log" containerID="cri-o://0217c431d44df2f20d49bb80b38d3634c60d8d298432655fbcd6784934a969c5" gracePeriod=30 Jan 21 14:52:58 crc kubenswrapper[4902]: I0121 14:52:58.118675 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerName="glance-httpd" containerID="cri-o://b8ee32986cfb942a0b8084ffe5d7119b0f8740770e39cc90369ef4116159b309" gracePeriod=30 Jan 21 14:52:58 crc kubenswrapper[4902]: I0121 14:52:58.139826 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.139805627 podStartE2EDuration="6.139805627s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:58.134814676 +0000 UTC m=+1140.211647705" watchObservedRunningTime="2026-01-21 14:52:58.139805627 +0000 UTC m=+1140.216638656" Jan 21 14:52:58 crc kubenswrapper[4902]: I0121 14:52:58.164719 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.164670161 podStartE2EDuration="6.164670161s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:58.162118078 +0000 UTC m=+1140.238951127" watchObservedRunningTime="2026-01-21 14:52:58.164670161 +0000 UTC m=+1140.241503190" Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 14:52:59.152457 4902 generic.go:334] "Generic (PLEG): container finished" podID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerID="b8ee32986cfb942a0b8084ffe5d7119b0f8740770e39cc90369ef4116159b309" exitCode=143 Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 14:52:59.152501 4902 generic.go:334] "Generic (PLEG): container finished" podID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerID="0217c431d44df2f20d49bb80b38d3634c60d8d298432655fbcd6784934a969c5" exitCode=143 Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 14:52:59.152533 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55daf4a6-0e2d-4832-8740-87f628a6e2cc","Type":"ContainerDied","Data":"b8ee32986cfb942a0b8084ffe5d7119b0f8740770e39cc90369ef4116159b309"} Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 14:52:59.152573 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55daf4a6-0e2d-4832-8740-87f628a6e2cc","Type":"ContainerDied","Data":"0217c431d44df2f20d49bb80b38d3634c60d8d298432655fbcd6784934a969c5"} Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 
Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 14:52:59.155553 4902 generic.go:334] "Generic (PLEG): container finished" podID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerID="8d87c9d3ad1eb4e5b4f658d4e0a489d56cdaee9fc570202fc41afc9916f4ea6a" exitCode=143
Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 14:52:59.155608 4902 generic.go:334] "Generic (PLEG): container finished" podID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerID="4cc1203e814fc62d40f33869f21d58884c995a732338ba7c6403b666fa8b712d" exitCode=143
Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 14:52:59.155651 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7cbc3227-2b2b-489c-bc35-2266eae99935","Type":"ContainerDied","Data":"8d87c9d3ad1eb4e5b4f658d4e0a489d56cdaee9fc570202fc41afc9916f4ea6a"}
Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 14:52:59.155739 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7cbc3227-2b2b-489c-bc35-2266eae99935","Type":"ContainerDied","Data":"4cc1203e814fc62d40f33869f21d58884c995a732338ba7c6403b666fa8b712d"}
Jan 21 14:53:00 crc kubenswrapper[4902]: I0121 14:53:00.168063 4902 generic.go:334] "Generic (PLEG): container finished" podID="e2be2f88-2ef5-4773-a31c-a8acd6e27608" containerID="276b271b02ab000b334b001c5253fa10542fc6c000e67438f4ac84d47645e83c" exitCode=0
Jan 21 14:53:00 crc kubenswrapper[4902]: I0121 14:53:00.168098 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dk26m" event={"ID":"e2be2f88-2ef5-4773-a31c-a8acd6e27608","Type":"ContainerDied","Data":"276b271b02ab000b334b001c5253fa10542fc6c000e67438f4ac84d47645e83c"}
Jan 21 14:53:03 crc kubenswrapper[4902]: I0121 14:53:03.379023 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn"
Jan 21 14:53:03 crc kubenswrapper[4902]: I0121 14:53:03.447190 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-ns7jh"]
Jan 21 14:53:03 crc kubenswrapper[4902]: I0121 14:53:03.447485 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="dnsmasq-dns" containerID="cri-o://db22298ae310fe9c4abed7194da190cb997c649d5ca02aff4d05d6c947c77a3f" gracePeriod=10
Jan 21 14:53:04 crc kubenswrapper[4902]: I0121 14:53:04.161286 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused"
Jan 21 14:53:04 crc kubenswrapper[4902]: I0121 14:53:04.225624 4902 generic.go:334] "Generic (PLEG): container finished" podID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerID="db22298ae310fe9c4abed7194da190cb997c649d5ca02aff4d05d6c947c77a3f" exitCode=0
Jan 21 14:53:04 crc kubenswrapper[4902]: I0121 14:53:04.225681 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" event={"ID":"9cfdec8c-8d41-4ae4-ad01-a4b76f589140","Type":"ContainerDied","Data":"db22298ae310fe9c4abed7194da190cb997c649d5ca02aff4d05d6c947c77a3f"}
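The failed probe output ("dial tcp 10.217.0.129:5353: connect: connection refused") shows a TCP readiness probe: dnsmasq has already received SIGTERM, stopped listening, and the dial is refused, so the container is marked unready while it drains. Roughly what such a probe boils down to, in a self-contained Go sketch using the pod IP:port from the line above:

    // tcpready.go - roughly what a TCP readiness probe reduces to:
    // one dial with a timeout; refused or timed out means "not ready".
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func tcpReady(addr string, timeout time.Duration) error {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            return err // e.g. "connect: connection refused"
        }
        return conn.Close()
    }

    func main() {
        if err := tcpReady("10.217.0.129:5353", time.Second); err != nil {
            fmt.Println("probe failed:", err)
        } else {
            fmt.Println("probe ok")
        }
    }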
event={"ID":"e2be2f88-2ef5-4773-a31c-a8acd6e27608","Type":"ContainerDied","Data":"7544f3d00e92bd9decbbbaa4f539cd4161aa3325e13242445399b9b398495d75"} Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.243993 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7544f3d00e92bd9decbbbaa4f539cd4161aa3325e13242445399b9b398495d75" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.245434 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7cbc3227-2b2b-489c-bc35-2266eae99935","Type":"ContainerDied","Data":"f3dd33815c8a4853e7c5aeed751d1592a9c789599335b4e59224938b34e8c9a4"} Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.245460 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3dd33815c8a4853e7c5aeed751d1592a9c789599335b4e59224938b34e8c9a4" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.326395 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.326980 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.499965 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-scripts\") pod \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.502392 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-combined-ca-bundle\") pod \"7cbc3227-2b2b-489c-bc35-2266eae99935\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.502747 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-combined-ca-bundle\") pod \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.502805 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-scripts\") pod \"7cbc3227-2b2b-489c-bc35-2266eae99935\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.502851 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-credential-keys\") pod \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.502887 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-fernet-keys\") pod \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.502976 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-httpd-run\") pod \"7cbc3227-2b2b-489c-bc35-2266eae99935\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503093 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-logs\") pod \"7cbc3227-2b2b-489c-bc35-2266eae99935\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503135 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6s8\" (UniqueName: \"kubernetes.io/projected/7cbc3227-2b2b-489c-bc35-2266eae99935-kube-api-access-sb6s8\") pod \"7cbc3227-2b2b-489c-bc35-2266eae99935\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503219 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-config-data\") pod \"7cbc3227-2b2b-489c-bc35-2266eae99935\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503233 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7cbc3227-2b2b-489c-bc35-2266eae99935" (UID: "7cbc3227-2b2b-489c-bc35-2266eae99935"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503494 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-logs" (OuterVolumeSpecName: "logs") pod "7cbc3227-2b2b-489c-bc35-2266eae99935" (UID: "7cbc3227-2b2b-489c-bc35-2266eae99935"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503677 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-config-data\") pod \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503762 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-internal-tls-certs\") pod \"7cbc3227-2b2b-489c-bc35-2266eae99935\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503852 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"7cbc3227-2b2b-489c-bc35-2266eae99935\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503925 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dblzl\" (UniqueName: \"kubernetes.io/projected/e2be2f88-2ef5-4773-a31c-a8acd6e27608-kube-api-access-dblzl\") pod \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.504589 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.504607 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.507736 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-scripts" (OuterVolumeSpecName: "scripts") pod "e2be2f88-2ef5-4773-a31c-a8acd6e27608" (UID: "e2be2f88-2ef5-4773-a31c-a8acd6e27608"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.507949 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e2be2f88-2ef5-4773-a31c-a8acd6e27608" (UID: "e2be2f88-2ef5-4773-a31c-a8acd6e27608"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.507973 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cbc3227-2b2b-489c-bc35-2266eae99935-kube-api-access-sb6s8" (OuterVolumeSpecName: "kube-api-access-sb6s8") pod "7cbc3227-2b2b-489c-bc35-2266eae99935" (UID: "7cbc3227-2b2b-489c-bc35-2266eae99935"). InnerVolumeSpecName "kube-api-access-sb6s8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.508078 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-scripts" (OuterVolumeSpecName: "scripts") pod "7cbc3227-2b2b-489c-bc35-2266eae99935" (UID: "7cbc3227-2b2b-489c-bc35-2266eae99935"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.509585 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e2be2f88-2ef5-4773-a31c-a8acd6e27608" (UID: "e2be2f88-2ef5-4773-a31c-a8acd6e27608"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.509743 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "7cbc3227-2b2b-489c-bc35-2266eae99935" (UID: "7cbc3227-2b2b-489c-bc35-2266eae99935"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.517975 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2be2f88-2ef5-4773-a31c-a8acd6e27608-kube-api-access-dblzl" (OuterVolumeSpecName: "kube-api-access-dblzl") pod "e2be2f88-2ef5-4773-a31c-a8acd6e27608" (UID: "e2be2f88-2ef5-4773-a31c-a8acd6e27608"). InnerVolumeSpecName "kube-api-access-dblzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.528676 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2be2f88-2ef5-4773-a31c-a8acd6e27608" (UID: "e2be2f88-2ef5-4773-a31c-a8acd6e27608"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.531227 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cbc3227-2b2b-489c-bc35-2266eae99935" (UID: "7cbc3227-2b2b-489c-bc35-2266eae99935"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.536157 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-config-data" (OuterVolumeSpecName: "config-data") pod "e2be2f88-2ef5-4773-a31c-a8acd6e27608" (UID: "e2be2f88-2ef5-4773-a31c-a8acd6e27608"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.550115 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-config-data" (OuterVolumeSpecName: "config-data") pod "7cbc3227-2b2b-489c-bc35-2266eae99935" (UID: "7cbc3227-2b2b-489c-bc35-2266eae99935"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.557608 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7cbc3227-2b2b-489c-bc35-2266eae99935" (UID: "7cbc3227-2b2b-489c-bc35-2266eae99935"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.605880 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6s8\" (UniqueName: \"kubernetes.io/projected/7cbc3227-2b2b-489c-bc35-2266eae99935-kube-api-access-sb6s8\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.605924 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.605941 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.605952 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.605991 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.606005 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dblzl\" (UniqueName: \"kubernetes.io/projected/e2be2f88-2ef5-4773-a31c-a8acd6e27608-kube-api-access-dblzl\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.606018 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.606032 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.606062 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.606073 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.606083 4902 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.606095 4902 reconciler_common.go:293] "Volume detached for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.628422 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.707506 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.254117 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.254192 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.297685 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.316224 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.331178 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:53:07 crc kubenswrapper[4902]: E0121 14:53:07.331850 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerName="glance-log" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.331963 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerName="glance-log" Jan 21 14:53:07 crc kubenswrapper[4902]: E0121 14:53:07.332064 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerName="glance-httpd" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.332154 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerName="glance-httpd" Jan 21 14:53:07 crc kubenswrapper[4902]: E0121 14:53:07.332235 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2be2f88-2ef5-4773-a31c-a8acd6e27608" containerName="keystone-bootstrap" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.332302 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2be2f88-2ef5-4773-a31c-a8acd6e27608" containerName="keystone-bootstrap" Jan 21 14:53:07 crc kubenswrapper[4902]: E0121 14:53:07.332398 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb4097d7-2ce0-4a7c-b524-82d34c3d368c" containerName="init" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.332469 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb4097d7-2ce0-4a7c-b524-82d34c3d368c" containerName="init" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.332731 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerName="glance-log" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.332822 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2be2f88-2ef5-4773-a31c-a8acd6e27608" containerName="keystone-bootstrap" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.332901 4902 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="cb4097d7-2ce0-4a7c-b524-82d34c3d368c" containerName="init" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.332980 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerName="glance-httpd" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.334149 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.336688 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.336863 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.340855 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.524673 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-logs\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.525086 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.525136 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tphvc\" (UniqueName: \"kubernetes.io/projected/5048d12c-b66b-4f2f-a706-0e2978b5f0db-kube-api-access-tphvc\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.525169 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.525218 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.525262 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.525292 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.525321 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.546401 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dk26m"] Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.561002 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dk26m"] Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.626597 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tphvc\" (UniqueName: \"kubernetes.io/projected/5048d12c-b66b-4f2f-a706-0e2978b5f0db-kube-api-access-tphvc\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.626651 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.626691 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.626735 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.626758 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.626780 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.626817 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.626857 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.631685 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-c6zzp"] Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.632015 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.632155 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.632349 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-logs\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.633280 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.635528 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.635640 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.635880 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.640583 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-c6zzp"] Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.641370 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.641907 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.641967 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.642118 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.642173 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5z5b6" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.645245 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tphvc\" (UniqueName: \"kubernetes.io/projected/5048d12c-b66b-4f2f-a706-0e2978b5f0db-kube-api-access-tphvc\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.656929 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.677535 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.830068 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-fernet-keys\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.830148 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-config-data\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.830226 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-combined-ca-bundle\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.830289 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6smc6\" (UniqueName: \"kubernetes.io/projected/966f492d-0f8f-4bef-b60f-777f25367104-kube-api-access-6smc6\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.830313 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-credential-keys\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.830402 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-scripts\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.931920 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-combined-ca-bundle\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.932031 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6smc6\" (UniqueName: \"kubernetes.io/projected/966f492d-0f8f-4bef-b60f-777f25367104-kube-api-access-6smc6\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.932088 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-credential-keys\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.932152 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-scripts\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.932186 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-fernet-keys\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.932216 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-config-data\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.938768 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-fernet-keys\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.940950 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-combined-ca-bundle\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.951275 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-config-data\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.951882 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-credential-keys\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.958457 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-scripts\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.959124 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.970741 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6smc6\" (UniqueName: \"kubernetes.io/projected/966f492d-0f8f-4bef-b60f-777f25367104-kube-api-access-6smc6\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:08 crc kubenswrapper[4902]: I0121 14:53:08.033931 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:08 crc kubenswrapper[4902]: I0121 14:53:08.316662 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" path="/var/lib/kubelet/pods/7cbc3227-2b2b-489c-bc35-2266eae99935/volumes" Jan 21 14:53:08 crc kubenswrapper[4902]: I0121 14:53:08.317679 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2be2f88-2ef5-4773-a31c-a8acd6e27608" path="/var/lib/kubelet/pods/e2be2f88-2ef5-4773-a31c-a8acd6e27608/volumes" Jan 21 14:53:14 crc kubenswrapper[4902]: I0121 14:53:14.097321 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Jan 21 14:53:16 crc kubenswrapper[4902]: E0121 14:53:16.041981 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Jan 21 14:53:16 crc kubenswrapper[4902]: E0121 14:53:16.042482 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6tjb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-4ds4z_openstack(df9277be-e557-4d2e-b799-8fc6def975b9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:53:16 crc kubenswrapper[4902]: E0121 14:53:16.043996 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-4ds4z" 
podUID="df9277be-e557-4d2e-b799-8fc6def975b9" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.149913 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.156855 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.161862 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-combined-ca-bundle\") pod \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.161972 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-config-data\") pod \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.162088 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-scripts\") pod \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.162120 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-public-tls-certs\") pod \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.162157 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v429g\" (UniqueName: \"kubernetes.io/projected/55daf4a6-0e2d-4832-8740-87f628a6e2cc-kube-api-access-v429g\") pod \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.162240 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-logs\") pod \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.162284 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-httpd-run\") pod \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.162325 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.163298 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "55daf4a6-0e2d-4832-8740-87f628a6e2cc" (UID: "55daf4a6-0e2d-4832-8740-87f628a6e2cc"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.163338 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-logs" (OuterVolumeSpecName: "logs") pod "55daf4a6-0e2d-4832-8740-87f628a6e2cc" (UID: "55daf4a6-0e2d-4832-8740-87f628a6e2cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.171418 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55daf4a6-0e2d-4832-8740-87f628a6e2cc-kube-api-access-v429g" (OuterVolumeSpecName: "kube-api-access-v429g") pod "55daf4a6-0e2d-4832-8740-87f628a6e2cc" (UID: "55daf4a6-0e2d-4832-8740-87f628a6e2cc"). InnerVolumeSpecName "kube-api-access-v429g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.186521 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "55daf4a6-0e2d-4832-8740-87f628a6e2cc" (UID: "55daf4a6-0e2d-4832-8740-87f628a6e2cc"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.186564 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-scripts" (OuterVolumeSpecName: "scripts") pod "55daf4a6-0e2d-4832-8740-87f628a6e2cc" (UID: "55daf4a6-0e2d-4832-8740-87f628a6e2cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.206607 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55daf4a6-0e2d-4832-8740-87f628a6e2cc" (UID: "55daf4a6-0e2d-4832-8740-87f628a6e2cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.218888 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "55daf4a6-0e2d-4832-8740-87f628a6e2cc" (UID: "55daf4a6-0e2d-4832-8740-87f628a6e2cc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.251005 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-config-data" (OuterVolumeSpecName: "config-data") pod "55daf4a6-0e2d-4832-8740-87f628a6e2cc" (UID: "55daf4a6-0e2d-4832-8740-87f628a6e2cc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.263862 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgdkp\" (UniqueName: \"kubernetes.io/projected/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-kube-api-access-xgdkp\") pod \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.263913 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-svc\") pod \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.263963 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-nb\") pod \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264032 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-sb\") pod \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264109 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-swift-storage-0\") pod \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264524 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-config\") pod \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264894 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v429g\" (UniqueName: \"kubernetes.io/projected/55daf4a6-0e2d-4832-8740-87f628a6e2cc-kube-api-access-v429g\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264907 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264915 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264932 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264941 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264949 4902 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264956 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264968 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.269664 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-kube-api-access-xgdkp" (OuterVolumeSpecName: "kube-api-access-xgdkp") pod "9cfdec8c-8d41-4ae4-ad01-a4b76f589140" (UID: "9cfdec8c-8d41-4ae4-ad01-a4b76f589140"). InnerVolumeSpecName "kube-api-access-xgdkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.284730 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.312395 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-config" (OuterVolumeSpecName: "config") pod "9cfdec8c-8d41-4ae4-ad01-a4b76f589140" (UID: "9cfdec8c-8d41-4ae4-ad01-a4b76f589140"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.312755 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9cfdec8c-8d41-4ae4-ad01-a4b76f589140" (UID: "9cfdec8c-8d41-4ae4-ad01-a4b76f589140"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.313595 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9cfdec8c-8d41-4ae4-ad01-a4b76f589140" (UID: "9cfdec8c-8d41-4ae4-ad01-a4b76f589140"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.316667 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9cfdec8c-8d41-4ae4-ad01-a4b76f589140" (UID: "9cfdec8c-8d41-4ae4-ad01-a4b76f589140"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.327231 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9cfdec8c-8d41-4ae4-ad01-a4b76f589140" (UID: "9cfdec8c-8d41-4ae4-ad01-a4b76f589140"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.366589 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.366629 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.366640 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgdkp\" (UniqueName: \"kubernetes.io/projected/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-kube-api-access-xgdkp\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.366651 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.366659 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.366670 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.366679 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.430768 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55daf4a6-0e2d-4832-8740-87f628a6e2cc","Type":"ContainerDied","Data":"7b40ba155df5f9ea3c66a7bdb479c5b6c0f2b6eda7d8e8f89404b65e212bd221"} Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.430864 4902 scope.go:117] "RemoveContainer" containerID="b8ee32986cfb942a0b8084ffe5d7119b0f8740770e39cc90369ef4116159b309" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.431018 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.438199 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" event={"ID":"9cfdec8c-8d41-4ae4-ad01-a4b76f589140","Type":"ContainerDied","Data":"f37440f856bd371cff80f2f0f1e426de41d7fcfe1af9a8b2a61bd34561bbe363"} Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.438220 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:53:16 crc kubenswrapper[4902]: E0121 14:53:16.439494 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-4ds4z" podUID="df9277be-e557-4d2e-b799-8fc6def975b9" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.482459 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.490406 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.506532 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-ns7jh"] Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.517569 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-ns7jh"] Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.538859 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:53:16 crc kubenswrapper[4902]: E0121 14:53:16.539316 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerName="glance-log" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.539339 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerName="glance-log" Jan 21 14:53:16 crc kubenswrapper[4902]: E0121 14:53:16.539363 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="dnsmasq-dns" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.539373 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="dnsmasq-dns" Jan 21 14:53:16 crc kubenswrapper[4902]: E0121 14:53:16.539405 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="init" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.539415 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="init" Jan 21 14:53:16 crc kubenswrapper[4902]: E0121 14:53:16.539431 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerName="glance-httpd" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.539438 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerName="glance-httpd" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.539651 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerName="glance-httpd" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.539682 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerName="glance-log" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.539694 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="dnsmasq-dns" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.540727 4902 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.542814 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.543918 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.551782 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.569457 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.569536 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.569577 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9kr\" (UniqueName: \"kubernetes.io/projected/30ff158a-452e-4180-b99e-9a171035d794-kube-api-access-5x9kr\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.569608 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-logs\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.569627 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-scripts\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.569644 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.569696 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-config-data\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.569724 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.670699 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-config-data\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.670768 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.670803 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.670867 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.670902 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9kr\" (UniqueName: \"kubernetes.io/projected/30ff158a-452e-4180-b99e-9a171035d794-kube-api-access-5x9kr\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.670944 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-logs\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.670970 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-scripts\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.670993 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.671336 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.671945 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-logs\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.672072 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.675513 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.676438 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-config-data\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.676669 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.685373 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-scripts\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.689467 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x9kr\" (UniqueName: \"kubernetes.io/projected/30ff158a-452e-4180-b99e-9a171035d794-kube-api-access-5x9kr\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.695926 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.867301 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:53:17 crc kubenswrapper[4902]: I0121 14:53:17.511397 4902 scope.go:117] "RemoveContainer" containerID="0217c431d44df2f20d49bb80b38d3634c60d8d298432655fbcd6784934a969c5" Jan 21 14:53:17 crc kubenswrapper[4902]: E0121 14:53:17.522178 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 21 14:53:17 crc kubenswrapper[4902]: E0121 14:53:17.522391 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4xph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-twg7k_openstack(137b1040-d368-4b6d-a4db-ba7c626f666f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:53:17 crc kubenswrapper[4902]: E0121 14:53:17.523684 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc 
Jan 21 14:53:17 crc kubenswrapper[4902]: I0121 14:53:17.673150 4902 scope.go:117] "RemoveContainer" containerID="db22298ae310fe9c4abed7194da190cb997c649d5ca02aff4d05d6c947c77a3f"
Jan 21 14:53:17 crc kubenswrapper[4902]: I0121 14:53:17.725704 4902 scope.go:117] "RemoveContainer" containerID="085cc064e188fc067c109385def65abea0a69b47e1fe8f6dadc55d4ea12c4007"
Jan 21 14:53:17 crc kubenswrapper[4902]: I0121 14:53:17.978998 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-c6zzp"]
Jan 21 14:53:17 crc kubenswrapper[4902]: W0121 14:53:17.980701 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod966f492d_0f8f_4bef_b60f_777f25367104.slice/crio-3313ecc8a66d98cc624d87ed14dbd071277551fa0b6dc3f15d60fe60589dd37a WatchSource:0}: Error finding container 3313ecc8a66d98cc624d87ed14dbd071277551fa0b6dc3f15d60fe60589dd37a: Status 404 returned error can't find the container with id 3313ecc8a66d98cc624d87ed14dbd071277551fa0b6dc3f15d60fe60589dd37a
Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.160563 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.260119 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 14:53:18 crc kubenswrapper[4902]: W0121 14:53:18.274549 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30ff158a_452e_4180_b99e_9a171035d794.slice/crio-fd2671c5a041bc1da6743eacf4aa7bb033c883d2faa27ca05b2e9c42b04bf8ce WatchSource:0}: Error finding container fd2671c5a041bc1da6743eacf4aa7bb033c883d2faa27ca05b2e9c42b04bf8ce: Status 404 returned error can't find the container with id fd2671c5a041bc1da6743eacf4aa7bb033c883d2faa27ca05b2e9c42b04bf8ce
Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.325885 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" path="/var/lib/kubelet/pods/55daf4a6-0e2d-4832-8740-87f628a6e2cc/volumes"
Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.326778 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" path="/var/lib/kubelet/pods/9cfdec8c-8d41-4ae4-ad01-a4b76f589140/volumes"
Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.456068 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c6zzp" event={"ID":"966f492d-0f8f-4bef-b60f-777f25367104","Type":"ContainerStarted","Data":"b1d80a37b9ccbfaf4f2535ef16320e6b2227313b028f8ea36eaf1aa897c3fa62"}
Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.456655 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c6zzp" event={"ID":"966f492d-0f8f-4bef-b60f-777f25367104","Type":"ContainerStarted","Data":"3313ecc8a66d98cc624d87ed14dbd071277551fa0b6dc3f15d60fe60589dd37a"}
Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.460010 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5048d12c-b66b-4f2f-a706-0e2978b5f0db","Type":"ContainerStarted","Data":"8a331ab6e6d4779bd2c4ccae990c6e7b561e92f584c35ef4a58e44ff1375f620"}
Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.465342 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerStarted","Data":"51f743b434cb645595957492e22a74e7b49af55c8dc22934431bf87a6f8fd041"}
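The W manager.go:1169 warnings are a benign race: cAdvisor sees the new crio-... cgroup from its watch before the container is registered, gets a 404, and catches up on a later housekeeping pass. The surrounding "SyncLoop (PLEG)" lines come from the pod lifecycle event generator, which periodically relists runtime containers and diffs the result into ContainerStarted/ContainerDied events; a heavily simplified, illustrative sketch:

package main

import "fmt"

type event struct{ id, typ string }

// relist diffs two snapshots of container-ID -> running, the way PLEG turns
// runtime state changes into lifecycle events for the sync loop.
func relist(old, cur map[string]bool) []event {
    var evs []event
    for id, running := range cur {
        if running && !old[id] {
            evs = append(evs, event{id, "ContainerStarted"})
        }
    }
    for id, running := range old {
        if running && !cur[id] {
            evs = append(evs, event{id, "ContainerDied"})
        }
    }
    return evs
}

func main() {
    old := map[string]bool{"f37440f856bd": true}
    cur := map[string]bool{"3313ecc8a66d": true, "b1d80a37b9cc": true}
    for _, e := range relist(old, cur) {
        fmt.Println(e.typ, e.id)
    }
}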
Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.465342 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerStarted","Data":"51f743b434cb645595957492e22a74e7b49af55c8dc22934431bf87a6f8fd041"}
Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.468932 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-b64dh" event={"ID":"83490157-abed-443f-8843-945bb43715af","Type":"ContainerStarted","Data":"885834645f14556231a3e7a784298540883e5e957ef165eb89b1d865e26a97ac"}
Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.474829 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ff158a-452e-4180-b99e-9a171035d794","Type":"ContainerStarted","Data":"fd2671c5a041bc1da6743eacf4aa7bb033c883d2faa27ca05b2e9c42b04bf8ce"}
Jan 21 14:53:18 crc kubenswrapper[4902]: E0121 14:53:18.478692 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-twg7k" podUID="137b1040-d368-4b6d-a4db-ba7c626f666f"
Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.483757 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-c6zzp" podStartSLOduration=11.483733018 podStartE2EDuration="11.483733018s" podCreationTimestamp="2026-01-21 14:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:18.470891304 +0000 UTC m=+1160.547724333" watchObservedRunningTime="2026-01-21 14:53:18.483733018 +0000 UTC m=+1160.560566047"
Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.501003 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-b64dh" podStartSLOduration=3.23968357 podStartE2EDuration="26.500983506s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="2026-01-21 14:52:54.198878863 +0000 UTC m=+1136.275711902" lastFinishedPulling="2026-01-21 14:53:17.460178799 +0000 UTC m=+1159.537011838" observedRunningTime="2026-01-21 14:53:18.492731612 +0000 UTC m=+1160.569564641" watchObservedRunningTime="2026-01-21 14:53:18.500983506 +0000 UTC m=+1160.577816535"
Jan 21 14:53:19 crc kubenswrapper[4902]: I0121 14:53:19.098002 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout"
Jan 21 14:53:19 crc kubenswrapper[4902]: I0121 14:53:19.483178 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ff158a-452e-4180-b99e-9a171035d794","Type":"ContainerStarted","Data":"5409eefc8bdd22f49ffbcce65cbb54882443f5c301c1ffa55abf84f9c6380456"}
Jan 21 14:53:19 crc kubenswrapper[4902]: I0121 14:53:19.485860 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5048d12c-b66b-4f2f-a706-0e2978b5f0db","Type":"ContainerStarted","Data":"42387b53645327b4ee53a9ee8c0b9dee11eb438a26b3f114231094d19f35ba72"}
Jan 21 14:53:20 crc kubenswrapper[4902]: I0121 14:53:20.494149 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5048d12c-b66b-4f2f-a706-0e2978b5f0db","Type":"ContainerStarted","Data":"14edd557813066a0cf1d74f214913ca1b420c477db44eb9e034dfe2ede5a72df"}
Jan 21 14:53:28 crc kubenswrapper[4902]: I0121 14:53:28.569934 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ff158a-452e-4180-b99e-9a171035d794","Type":"ContainerStarted","Data":"71cca2f7cf5320c189b79d957584fa123f879c7a9cb4707bef6dd5f5eb455d19"}
Jan 21 14:53:28 crc kubenswrapper[4902]: I0121 14:53:28.595929 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=21.595910255 podStartE2EDuration="21.595910255s" podCreationTimestamp="2026-01-21 14:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:28.58829618 +0000 UTC m=+1170.665129209" watchObservedRunningTime="2026-01-21 14:53:28.595910255 +0000 UTC m=+1170.672743284"
Jan 21 14:53:28 crc kubenswrapper[4902]: I0121 14:53:28.611184 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.611164957 podStartE2EDuration="12.611164957s" podCreationTimestamp="2026-01-21 14:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:28.608861302 +0000 UTC m=+1170.685694331" watchObservedRunningTime="2026-01-21 14:53:28.611164957 +0000 UTC m=+1170.687997986"
Jan 21 14:53:31 crc kubenswrapper[4902]: I0121 14:53:31.599824 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4ds4z" event={"ID":"df9277be-e557-4d2e-b799-8fc6def975b9","Type":"ContainerStarted","Data":"ae254c62b0513ec3d622f49b853707c7b475818d264ba6a9ceb8efcfd14f5993"}
Jan 21 14:53:31 crc kubenswrapper[4902]: I0121 14:53:31.601999 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerStarted","Data":"ae5941c9751c3e64a04d8b1b8d712d2bab081ccec45d54e0c30b1caa64b2df4d"}
Jan 21 14:53:32 crc kubenswrapper[4902]: I0121 14:53:32.614309 4902 generic.go:334] "Generic (PLEG): container finished" podID="966f492d-0f8f-4bef-b60f-777f25367104" containerID="b1d80a37b9ccbfaf4f2535ef16320e6b2227313b028f8ea36eaf1aa897c3fa62" exitCode=0
Jan 21 14:53:32 crc kubenswrapper[4902]: I0121 14:53:32.614346 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c6zzp" event={"ID":"966f492d-0f8f-4bef-b60f-777f25367104","Type":"ContainerDied","Data":"b1d80a37b9ccbfaf4f2535ef16320e6b2227313b028f8ea36eaf1aa897c3fa62"}
Jan 21 14:53:32 crc kubenswrapper[4902]: I0121 14:53:32.654612 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4ds4z" podStartSLOduration=3.521797194 podStartE2EDuration="40.654594741s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="2026-01-21 14:52:54.211971664 +0000 UTC m=+1136.288804693" lastFinishedPulling="2026-01-21 14:53:31.344769221 +0000 UTC m=+1173.421602240" observedRunningTime="2026-01-21 14:53:32.65065108 +0000 UTC m=+1174.727484109" watchObservedRunningTime="2026-01-21 14:53:32.654594741 +0000 UTC m=+1174.731427770"
container finished" podID="83490157-abed-443f-8843-945bb43715af" containerID="885834645f14556231a3e7a784298540883e5e957ef165eb89b1d865e26a97ac" exitCode=0 Jan 21 14:53:33 crc kubenswrapper[4902]: I0121 14:53:33.622689 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-b64dh" event={"ID":"83490157-abed-443f-8843-945bb43715af","Type":"ContainerDied","Data":"885834645f14556231a3e7a784298540883e5e957ef165eb89b1d865e26a97ac"} Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.341432 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.425900 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-credential-keys\") pod \"966f492d-0f8f-4bef-b60f-777f25367104\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.425998 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-config-data\") pod \"966f492d-0f8f-4bef-b60f-777f25367104\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.426121 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-combined-ca-bundle\") pod \"966f492d-0f8f-4bef-b60f-777f25367104\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.426170 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-fernet-keys\") pod \"966f492d-0f8f-4bef-b60f-777f25367104\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.426201 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6smc6\" (UniqueName: \"kubernetes.io/projected/966f492d-0f8f-4bef-b60f-777f25367104-kube-api-access-6smc6\") pod \"966f492d-0f8f-4bef-b60f-777f25367104\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.426232 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-scripts\") pod \"966f492d-0f8f-4bef-b60f-777f25367104\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.431969 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "966f492d-0f8f-4bef-b60f-777f25367104" (UID: "966f492d-0f8f-4bef-b60f-777f25367104"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.432172 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "966f492d-0f8f-4bef-b60f-777f25367104" (UID: "966f492d-0f8f-4bef-b60f-777f25367104"). 
InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.433368 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/966f492d-0f8f-4bef-b60f-777f25367104-kube-api-access-6smc6" (OuterVolumeSpecName: "kube-api-access-6smc6") pod "966f492d-0f8f-4bef-b60f-777f25367104" (UID: "966f492d-0f8f-4bef-b60f-777f25367104"). InnerVolumeSpecName "kube-api-access-6smc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.433495 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-scripts" (OuterVolumeSpecName: "scripts") pod "966f492d-0f8f-4bef-b60f-777f25367104" (UID: "966f492d-0f8f-4bef-b60f-777f25367104"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.451193 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "966f492d-0f8f-4bef-b60f-777f25367104" (UID: "966f492d-0f8f-4bef-b60f-777f25367104"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.466080 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-config-data" (OuterVolumeSpecName: "config-data") pod "966f492d-0f8f-4bef-b60f-777f25367104" (UID: "966f492d-0f8f-4bef-b60f-777f25367104"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.528249 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.528288 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.528301 4902 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.528311 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6smc6\" (UniqueName: \"kubernetes.io/projected/966f492d-0f8f-4bef-b60f-777f25367104-kube-api-access-6smc6\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.528321 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.528329 4902 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.642914 4902 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.642914 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-c6zzp"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.642980 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c6zzp" event={"ID":"966f492d-0f8f-4bef-b60f-777f25367104","Type":"ContainerDied","Data":"3313ecc8a66d98cc624d87ed14dbd071277551fa0b6dc3f15d60fe60589dd37a"}
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.643008 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3313ecc8a66d98cc624d87ed14dbd071277551fa0b6dc3f15d60fe60589dd37a"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.733323 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5684459db4-jgdkj"]
Jan 21 14:53:34 crc kubenswrapper[4902]: E0121 14:53:34.733790 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966f492d-0f8f-4bef-b60f-777f25367104" containerName="keystone-bootstrap"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.733812 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="966f492d-0f8f-4bef-b60f-777f25367104" containerName="keystone-bootstrap"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.737225 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="966f492d-0f8f-4bef-b60f-777f25367104" containerName="keystone-bootstrap"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.737973 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.744142 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.744814 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5z5b6"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.745082 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.745294 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.745434 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.746004 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.747464 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5684459db4-jgdkj"]
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.842801 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-fernet-keys\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.843028 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-combined-ca-bundle\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.843122 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-scripts\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.843222 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-public-tls-certs\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.843276 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-config-data\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.843418 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-internal-tls-certs\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.843484 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-credential-keys\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.843511 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtlrk\" (UniqueName: \"kubernetes.io/projected/8e00c7d5-7199-4602-9d3b-5af4f14124bc-kube-api-access-rtlrk\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.951066 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-internal-tls-certs\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.951136 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-credential-keys\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.951156 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtlrk\" (UniqueName: \"kubernetes.io/projected/8e00c7d5-7199-4602-9d3b-5af4f14124bc-kube-api-access-rtlrk\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.951225 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-fernet-keys\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.951280 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-combined-ca-bundle\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.951306 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-scripts\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.951369 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-public-tls-certs\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.951400 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-config-data\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.955790 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-credential-keys\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.955952 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-fernet-keys\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.955950 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-internal-tls-certs\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.957733 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-combined-ca-bundle\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.958944 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-scripts\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.964313 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-config-data\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.965029 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-public-tls-certs\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.987618 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtlrk\" (UniqueName: \"kubernetes.io/projected/8e00c7d5-7199-4602-9d3b-5af4f14124bc-kube-api-access-rtlrk\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:35 crc kubenswrapper[4902]: I0121 14:53:35.075935 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.675318 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-b64dh" event={"ID":"83490157-abed-443f-8843-945bb43715af","Type":"ContainerDied","Data":"9f26212e4bdc5bda5416b6956048e081a79eb4fe056e9e364faed24f7ac4f14f"}
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.675750 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f26212e4bdc5bda5416b6956048e081a79eb4fe056e9e364faed24f7ac4f14f"
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.763584 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-b64dh"
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.867974 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.868067 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.908150 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.911315 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-scripts\") pod \"83490157-abed-443f-8843-945bb43715af\" (UID: \"83490157-abed-443f-8843-945bb43715af\") "
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.911444 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-combined-ca-bundle\") pod \"83490157-abed-443f-8843-945bb43715af\" (UID: \"83490157-abed-443f-8843-945bb43715af\") "
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.911497 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83490157-abed-443f-8843-945bb43715af-logs\") pod \"83490157-abed-443f-8843-945bb43715af\" (UID: \"83490157-abed-443f-8843-945bb43715af\") "
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.911526 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlgnt\" (UniqueName: \"kubernetes.io/projected/83490157-abed-443f-8843-945bb43715af-kube-api-access-xlgnt\") pod \"83490157-abed-443f-8843-945bb43715af\" (UID: \"83490157-abed-443f-8843-945bb43715af\") "
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.911563 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-config-data\") pod \"83490157-abed-443f-8843-945bb43715af\" (UID: \"83490157-abed-443f-8843-945bb43715af\") "
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.912431 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83490157-abed-443f-8843-945bb43715af-logs" (OuterVolumeSpecName: "logs") pod "83490157-abed-443f-8843-945bb43715af" (UID: "83490157-abed-443f-8843-945bb43715af"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.917522 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-scripts" (OuterVolumeSpecName: "scripts") pod "83490157-abed-443f-8843-945bb43715af" (UID: "83490157-abed-443f-8843-945bb43715af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.929643 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.940149 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83490157-abed-443f-8843-945bb43715af-kube-api-access-xlgnt" (OuterVolumeSpecName: "kube-api-access-xlgnt") pod "83490157-abed-443f-8843-945bb43715af" (UID: "83490157-abed-443f-8843-945bb43715af"). InnerVolumeSpecName "kube-api-access-xlgnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.945257 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83490157-abed-443f-8843-945bb43715af" (UID: "83490157-abed-443f-8843-945bb43715af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.952212 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-config-data" (OuterVolumeSpecName: "config-data") pod "83490157-abed-443f-8843-945bb43715af" (UID: "83490157-abed-443f-8843-945bb43715af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.013701 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.013744 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83490157-abed-443f-8843-945bb43715af-logs\") on node \"crc\" DevicePath \"\""
Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.013756 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlgnt\" (UniqueName: \"kubernetes.io/projected/83490157-abed-443f-8843-945bb43715af-kube-api-access-xlgnt\") on node \"crc\" DevicePath \"\""
Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.013767 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.013778 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.028760 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5684459db4-jgdkj"]
Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.683925 4902 generic.go:334] "Generic (PLEG): container finished" podID="df9277be-e557-4d2e-b799-8fc6def975b9" containerID="ae254c62b0513ec3d622f49b853707c7b475818d264ba6a9ceb8efcfd14f5993" exitCode=0
event={"ID":"df9277be-e557-4d2e-b799-8fc6def975b9","Type":"ContainerDied","Data":"ae254c62b0513ec3d622f49b853707c7b475818d264ba6a9ceb8efcfd14f5993"} Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.686122 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-twg7k" event={"ID":"137b1040-d368-4b6d-a4db-ba7c626f666f","Type":"ContainerStarted","Data":"b544fa374d13ef6e784a8d5d16f0cdb36de690b191b7cd286db841a786a83df0"} Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.693174 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerStarted","Data":"81c864035291d1070b465dc09d84ebe0421ec1a192cefda871ae7e174cd7d5cb"} Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.695007 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-b64dh" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.695350 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5684459db4-jgdkj" event={"ID":"8e00c7d5-7199-4602-9d3b-5af4f14124bc","Type":"ContainerStarted","Data":"ea8dbb434ad9bd3e85adcd00febd132baf741c5aae1afe358fb761a39bcb889e"} Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.695395 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5684459db4-jgdkj" event={"ID":"8e00c7d5-7199-4602-9d3b-5af4f14124bc","Type":"ContainerStarted","Data":"a7b81b6927c5878e4864d8eea63ac6db97be31623e53b2291bbb5d03097d4cf8"} Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.695777 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.695922 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.695984 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.726212 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5684459db4-jgdkj" podStartSLOduration=3.726187633 podStartE2EDuration="3.726187633s" podCreationTimestamp="2026-01-21 14:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:37.720453631 +0000 UTC m=+1179.797286670" watchObservedRunningTime="2026-01-21 14:53:37.726187633 +0000 UTC m=+1179.803020662" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.748981 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-twg7k" podStartSLOduration=3.33336474 podStartE2EDuration="45.748965208s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="2026-01-21 14:52:54.182085558 +0000 UTC m=+1136.258918587" lastFinishedPulling="2026-01-21 14:53:36.597686026 +0000 UTC m=+1178.674519055" observedRunningTime="2026-01-21 14:53:37.740402756 +0000 UTC m=+1179.817235785" watchObservedRunningTime="2026-01-21 14:53:37.748965208 +0000 UTC m=+1179.825798237" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.918091 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7ddf9d8f68-jjk7f"] Jan 21 14:53:37 crc kubenswrapper[4902]: E0121 14:53:37.918509 4902 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="83490157-abed-443f-8843-945bb43715af" containerName="placement-db-sync" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.918530 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="83490157-abed-443f-8843-945bb43715af" containerName="placement-db-sync" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.918739 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="83490157-abed-443f-8843-945bb43715af" containerName="placement-db-sync" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.919796 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.923630 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.924774 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.925172 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-26vvq" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.925406 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.925563 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.951822 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ddf9d8f68-jjk7f"] Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.959288 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.959325 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.959337 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.959438 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.016480 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.019326 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.033063 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-combined-ca-bundle\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.033148 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-scripts\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " 
pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.033217 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-public-tls-certs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.033322 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b71fc896-318c-4277-bb32-70e3424a26c9-logs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.033359 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-internal-tls-certs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.033379 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-config-data\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.033407 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btb6d\" (UniqueName: \"kubernetes.io/projected/b71fc896-318c-4277-bb32-70e3424a26c9-kube-api-access-btb6d\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.135515 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b71fc896-318c-4277-bb32-70e3424a26c9-logs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.135558 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-internal-tls-certs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.135578 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-config-data\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.136113 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b71fc896-318c-4277-bb32-70e3424a26c9-logs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 
Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.136523 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btb6d\" (UniqueName: \"kubernetes.io/projected/b71fc896-318c-4277-bb32-70e3424a26c9-kube-api-access-btb6d\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f"
Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.136786 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-combined-ca-bundle\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f"
Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.136819 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-scripts\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f"
Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.136849 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-public-tls-certs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f"
Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.142702 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-combined-ca-bundle\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f"
Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.142758 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-config-data\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f"
Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.145823 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-scripts\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f"
Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.153949 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-public-tls-certs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f"
Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.156607 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-internal-tls-certs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f"
Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.156823 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btb6d\" (UniqueName: \"kubernetes.io/projected/b71fc896-318c-4277-bb32-70e3424a26c9-kube-api-access-btb6d\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f"
Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.236472 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7ddf9d8f68-jjk7f"
Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.711689 4902 generic.go:334] "Generic (PLEG): container finished" podID="15ef0c45-4c21-4824-850e-545f66a2c20a" containerID="c05aa038a30ca68cb9b9875b1713755a7a748b30cae2fd412e457a921170733c" exitCode=0
Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.711775 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zlh54" event={"ID":"15ef0c45-4c21-4824-850e-545f66a2c20a","Type":"ContainerDied","Data":"c05aa038a30ca68cb9b9875b1713755a7a748b30cae2fd412e457a921170733c"}
Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.791440 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ddf9d8f68-jjk7f"]
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.104000 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4ds4z"
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.275852 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-db-sync-config-data\") pod \"df9277be-e557-4d2e-b799-8fc6def975b9\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") "
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.276165 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tjb8\" (UniqueName: \"kubernetes.io/projected/df9277be-e557-4d2e-b799-8fc6def975b9-kube-api-access-6tjb8\") pod \"df9277be-e557-4d2e-b799-8fc6def975b9\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") "
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.276306 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-combined-ca-bundle\") pod \"df9277be-e557-4d2e-b799-8fc6def975b9\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") "
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.280088 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "df9277be-e557-4d2e-b799-8fc6def975b9" (UID: "df9277be-e557-4d2e-b799-8fc6def975b9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.280294 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df9277be-e557-4d2e-b799-8fc6def975b9-kube-api-access-6tjb8" (OuterVolumeSpecName: "kube-api-access-6tjb8") pod "df9277be-e557-4d2e-b799-8fc6def975b9" (UID: "df9277be-e557-4d2e-b799-8fc6def975b9"). InnerVolumeSpecName "kube-api-access-6tjb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.303803 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df9277be-e557-4d2e-b799-8fc6def975b9" (UID: "df9277be-e557-4d2e-b799-8fc6def975b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.378006 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.378055 4902 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.378067 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tjb8\" (UniqueName: \"kubernetes.io/projected/df9277be-e557-4d2e-b799-8fc6def975b9-kube-api-access-6tjb8\") on node \"crc\" DevicePath \"\""
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.772116 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ddf9d8f68-jjk7f" event={"ID":"b71fc896-318c-4277-bb32-70e3424a26c9","Type":"ContainerStarted","Data":"51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd"}
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.773126 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ddf9d8f68-jjk7f" event={"ID":"b71fc896-318c-4277-bb32-70e3424a26c9","Type":"ContainerStarted","Data":"bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924"}
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.773256 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ddf9d8f68-jjk7f" event={"ID":"b71fc896-318c-4277-bb32-70e3424a26c9","Type":"ContainerStarted","Data":"31d5a67184f80e0f8e30cfab691135f2f1fd9f01d89fed99d676f711a03521eb"}
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.781200 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ddf9d8f68-jjk7f"
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.781275 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ddf9d8f68-jjk7f"
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.844864 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4ds4z"
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.847138 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4ds4z" event={"ID":"df9277be-e557-4d2e-b799-8fc6def975b9","Type":"ContainerDied","Data":"1240a2082e984db724460dca85452b351506f660a1b70f26c765e2a219ef66f2"}
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.847188 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1240a2082e984db724460dca85452b351506f660a1b70f26c765e2a219ef66f2"
Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.951549 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7ddf9d8f68-jjk7f" podStartSLOduration=2.9515312639999998 podStartE2EDuration="2.951531264s" podCreationTimestamp="2026-01-21 14:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:39.945540784 +0000 UTC m=+1182.022373813" watchObservedRunningTime="2026-01-21 14:53:39.951531264 +0000 UTC m=+1182.028364293"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.009576 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-68564cb5c-bh98h"]
Jan 21 14:53:40 crc kubenswrapper[4902]: E0121 14:53:40.010124 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9277be-e557-4d2e-b799-8fc6def975b9" containerName="barbican-db-sync"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.010210 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9277be-e557-4d2e-b799-8fc6def975b9" containerName="barbican-db-sync"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.010487 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9277be-e557-4d2e-b799-8fc6def975b9" containerName="barbican-db-sync"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.011665 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68564cb5c-bh98h"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.016455 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.016830 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-lxg2q"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.017372 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.024603 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68564cb5c-bh98h"]
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.100102 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-b755cd77b-nd6p7"]
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.101846 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.105949 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.117226 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b755cd77b-nd6p7"]
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.148121 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data-custom\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.148347 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-combined-ca-bundle\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.148420 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c653ffa0-195e-4eda-8c25-cfcff2715bdf-logs\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.148551 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d74jj\" (UniqueName: \"kubernetes.io/projected/c653ffa0-195e-4eda-8c25-cfcff2715bdf-kube-api-access-d74jj\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.148621 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.167757 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-dfqgq"]
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.169302 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8fffc8985-dfqgq"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.191637 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-dfqgq"]
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.251944 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-svc\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.251982 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data-custom\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252013 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmpfc\" (UniqueName: \"kubernetes.io/projected/6ffe1f41-e154-4eb4-a871-60bdfaee1507-kube-api-access-rmpfc\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252036 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-swift-storage-0\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252074 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365d6c18-395e-4a62-939d-a04927ffa8aa-logs\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252101 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d74jj\" (UniqueName: \"kubernetes.io/projected/c653ffa0-195e-4eda-8c25-cfcff2715bdf-kube-api-access-d74jj\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252119 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-combined-ca-bundle\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252158 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h"
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252184 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data-custom\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252216 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-combined-ca-bundle\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252239 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96b9v\" (UniqueName: \"kubernetes.io/projected/365d6c18-395e-4a62-939d-a04927ffa8aa-kube-api-access-96b9v\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252260 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-config\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252284 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c653ffa0-195e-4eda-8c25-cfcff2715bdf-logs\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252350 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-sb\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252367 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252383 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-nb\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.254421 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c653ffa0-195e-4eda-8c25-cfcff2715bdf-logs\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: 
\"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.264994 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6445cbf9c4-z4mzt"] Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.266952 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-combined-ca-bundle\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.268262 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.273073 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.275322 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.276422 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data-custom\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.282688 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d74jj\" (UniqueName: \"kubernetes.io/projected/c653ffa0-195e-4eda-8c25-cfcff2715bdf-kube-api-access-d74jj\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.282910 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6445cbf9c4-z4mzt"] Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444117 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-sb\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444157 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444179 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data-custom\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 
14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444198 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-nb\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444225 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-svc\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444258 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data-custom\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444274 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vz6l\" (UniqueName: \"kubernetes.io/projected/b8579d67-5e61-40f2-9725-b695f7d7bb81-kube-api-access-8vz6l\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444302 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmpfc\" (UniqueName: \"kubernetes.io/projected/6ffe1f41-e154-4eb4-a871-60bdfaee1507-kube-api-access-rmpfc\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444323 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-swift-storage-0\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444351 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365d6c18-395e-4a62-939d-a04927ffa8aa-logs\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444405 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-combined-ca-bundle\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444471 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8579d67-5e61-40f2-9725-b695f7d7bb81-logs\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: 
\"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444492 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-combined-ca-bundle\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444534 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96b9v\" (UniqueName: \"kubernetes.io/projected/365d6c18-395e-4a62-939d-a04927ffa8aa-kube-api-access-96b9v\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444554 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-config\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444579 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.457995 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.463969 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-sb\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.465511 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-nb\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.466173 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-svc\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.467957 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.471336 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-combined-ca-bundle\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.474803 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-swift-storage-0\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.475114 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365d6c18-395e-4a62-939d-a04927ffa8aa-logs\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.477035 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-config\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.477733 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data-custom\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " 
pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.494690 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmpfc\" (UniqueName: \"kubernetes.io/projected/6ffe1f41-e154-4eb4-a871-60bdfaee1507-kube-api-access-rmpfc\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.503202 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96b9v\" (UniqueName: \"kubernetes.io/projected/365d6c18-395e-4a62-939d-a04927ffa8aa-kube-api-access-96b9v\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.504080 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.546548 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data-custom\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.546641 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vz6l\" (UniqueName: \"kubernetes.io/projected/b8579d67-5e61-40f2-9725-b695f7d7bb81-kube-api-access-8vz6l\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.546800 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8579d67-5e61-40f2-9725-b695f7d7bb81-logs\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.546835 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-combined-ca-bundle\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.546903 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.547458 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8579d67-5e61-40f2-9725-b695f7d7bb81-logs\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.550270 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data-custom\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.563843 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-combined-ca-bundle\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.563859 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.580097 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vz6l\" (UniqueName: \"kubernetes.io/projected/b8579d67-5e61-40f2-9725-b695f7d7bb81-kube-api-access-8vz6l\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.608777 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.638287 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zlh54" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.648106 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-combined-ca-bundle\") pod \"15ef0c45-4c21-4824-850e-545f66a2c20a\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.648319 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-config\") pod \"15ef0c45-4c21-4824-850e-545f66a2c20a\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.648409 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz5m2\" (UniqueName: \"kubernetes.io/projected/15ef0c45-4c21-4824-850e-545f66a2c20a-kube-api-access-bz5m2\") pod \"15ef0c45-4c21-4824-850e-545f66a2c20a\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.659071 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15ef0c45-4c21-4824-850e-545f66a2c20a-kube-api-access-bz5m2" (OuterVolumeSpecName: "kube-api-access-bz5m2") pod "15ef0c45-4c21-4824-850e-545f66a2c20a" (UID: "15ef0c45-4c21-4824-850e-545f66a2c20a"). InnerVolumeSpecName "kube-api-access-bz5m2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.687526 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15ef0c45-4c21-4824-850e-545f66a2c20a" (UID: "15ef0c45-4c21-4824-850e-545f66a2c20a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.708381 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.715690 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.733568 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.733575 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-config" (OuterVolumeSpecName: "config") pod "15ef0c45-4c21-4824-850e-545f66a2c20a" (UID: "15ef0c45-4c21-4824-850e-545f66a2c20a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.751786 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz5m2\" (UniqueName: \"kubernetes.io/projected/15ef0c45-4c21-4824-850e-545f66a2c20a-kube-api-access-bz5m2\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.752384 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.752402 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.909801 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-zlh54" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.910947 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zlh54" event={"ID":"15ef0c45-4c21-4824-850e-545f66a2c20a","Type":"ContainerDied","Data":"e397414d8ec5ae40c73abfed568886ab434c67c248ca086a30736dc5c091823f"} Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.910986 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e397414d8ec5ae40c73abfed568886ab434c67c248ca086a30736dc5c091823f" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.018083 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-dfqgq"] Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.048564 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-g578w"] Jan 21 14:53:41 crc kubenswrapper[4902]: E0121 14:53:41.051429 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ef0c45-4c21-4824-850e-545f66a2c20a" containerName="neutron-db-sync" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.051467 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ef0c45-4c21-4824-850e-545f66a2c20a" containerName="neutron-db-sync" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.051733 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ef0c45-4c21-4824-850e-545f66a2c20a" containerName="neutron-db-sync" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.053491 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.119703 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-g578w"] Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.193094 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-569676bc6b-gw28h"] Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.197119 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.199734 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.199772 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-config\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.199838 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.199877 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.199926 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.199982 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxthz\" (UniqueName: \"kubernetes.io/projected/95d16264-79d8-49aa-92aa-9f95f6f88ee5-kube-api-access-lxthz\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.212383 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-569676bc6b-gw28h"] Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.215580 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.215676 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.215975 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.216255 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d52d6" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.235106 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 14:53:41 crc 
kubenswrapper[4902]: I0121 14:53:41.254465 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68564cb5c-bh98h"] Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.301895 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-httpd-config\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302159 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302192 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxthz\" (UniqueName: \"kubernetes.io/projected/95d16264-79d8-49aa-92aa-9f95f6f88ee5-kube-api-access-lxthz\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302242 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slg7l\" (UniqueName: \"kubernetes.io/projected/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-kube-api-access-slg7l\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302272 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-config\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302409 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302448 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-config\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302524 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302575 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-swift-storage-0\") pod 
\"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302622 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-ovndb-tls-certs\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302655 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-combined-ca-bundle\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.303198 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.304029 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.305992 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.306153 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.306569 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-config\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.330301 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-dfqgq"] Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.337969 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxthz\" (UniqueName: \"kubernetes.io/projected/95d16264-79d8-49aa-92aa-9f95f6f88ee5-kube-api-access-lxthz\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.405232 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-combined-ca-bundle\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.405314 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-httpd-config\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.405383 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slg7l\" (UniqueName: \"kubernetes.io/projected/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-kube-api-access-slg7l\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.405403 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-config\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.405602 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-ovndb-tls-certs\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.411503 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.419698 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-ovndb-tls-certs\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.427743 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-httpd-config\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.429825 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-combined-ca-bundle\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.440878 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-config\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.468634 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slg7l\" (UniqueName: \"kubernetes.io/projected/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-kube-api-access-slg7l\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.551726 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.555814 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6445cbf9c4-z4mzt"] Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.676580 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b755cd77b-nd6p7"] Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.994416 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68564cb5c-bh98h" event={"ID":"c653ffa0-195e-4eda-8c25-cfcff2715bdf","Type":"ContainerStarted","Data":"0adea585b27eb9363f63f38b86e1f0b5aee1a5b47c7b1b2342897a2515892311"} Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.003263 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" event={"ID":"365d6c18-395e-4a62-939d-a04927ffa8aa","Type":"ContainerStarted","Data":"d7ec9f34e635f9308b93f9d0dc6cda96b623b10532da8d7eb05383f6117459ce"} Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.018379 4902 generic.go:334] "Generic (PLEG): container finished" podID="6ffe1f41-e154-4eb4-a871-60bdfaee1507" containerID="d79940242a6d6496842cce005349ce2182772f08d24d0ebbf491cb39873ab862" exitCode=0 Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.018461 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" event={"ID":"6ffe1f41-e154-4eb4-a871-60bdfaee1507","Type":"ContainerDied","Data":"d79940242a6d6496842cce005349ce2182772f08d24d0ebbf491cb39873ab862"} Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.018508 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" event={"ID":"6ffe1f41-e154-4eb4-a871-60bdfaee1507","Type":"ContainerStarted","Data":"874a2c6cdd912e6f6172031fa775f0dcec49b3d3d1e70dc96e7ff9e1e1fbe364"} Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.025281 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6445cbf9c4-z4mzt" event={"ID":"b8579d67-5e61-40f2-9725-b695f7d7bb81","Type":"ContainerStarted","Data":"b50cfe39dc085f1d64312e802e7695d3392dba0e54502997f28533561b89173b"} Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.242859 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-g578w"] Jan 21 14:53:42 crc kubenswrapper[4902]: W0121 14:53:42.262513 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95d16264_79d8_49aa_92aa_9f95f6f88ee5.slice/crio-3301d34c981a80c2f15194742c15041f8dd34876ffe78731a10b3a3473a032e9 WatchSource:0}: Error finding container 3301d34c981a80c2f15194742c15041f8dd34876ffe78731a10b3a3473a032e9: Status 404 returned error can't find the container with id 3301d34c981a80c2f15194742c15041f8dd34876ffe78731a10b3a3473a032e9 Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.506502 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.558466 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-569676bc6b-gw28h"] Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.567711 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-config\") pod \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.567798 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-sb\") pod \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.567927 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-svc\") pod \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.567966 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmpfc\" (UniqueName: \"kubernetes.io/projected/6ffe1f41-e154-4eb4-a871-60bdfaee1507-kube-api-access-rmpfc\") pod \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.568780 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-swift-storage-0\") pod \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.568812 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-nb\") pod \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.582700 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ffe1f41-e154-4eb4-a871-60bdfaee1507-kube-api-access-rmpfc" (OuterVolumeSpecName: "kube-api-access-rmpfc") pod "6ffe1f41-e154-4eb4-a871-60bdfaee1507" (UID: "6ffe1f41-e154-4eb4-a871-60bdfaee1507"). InnerVolumeSpecName "kube-api-access-rmpfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.593458 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6ffe1f41-e154-4eb4-a871-60bdfaee1507" (UID: "6ffe1f41-e154-4eb4-a871-60bdfaee1507"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.594225 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-config" (OuterVolumeSpecName: "config") pod "6ffe1f41-e154-4eb4-a871-60bdfaee1507" (UID: "6ffe1f41-e154-4eb4-a871-60bdfaee1507"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.597070 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6ffe1f41-e154-4eb4-a871-60bdfaee1507" (UID: "6ffe1f41-e154-4eb4-a871-60bdfaee1507"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:42 crc kubenswrapper[4902]: W0121 14:53:42.610941 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2e9efdb_8b95_4082_8a1d_8b5a987b2516.slice/crio-d6d295ac44d5e84b8146c28859766bda166d60fe457d50975c21ca70126c4bc2 WatchSource:0}: Error finding container d6d295ac44d5e84b8146c28859766bda166d60fe457d50975c21ca70126c4bc2: Status 404 returned error can't find the container with id d6d295ac44d5e84b8146c28859766bda166d60fe457d50975c21ca70126c4bc2 Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.650900 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6ffe1f41-e154-4eb4-a871-60bdfaee1507" (UID: "6ffe1f41-e154-4eb4-a871-60bdfaee1507"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.674690 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.675139 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmpfc\" (UniqueName: \"kubernetes.io/projected/6ffe1f41-e154-4eb4-a871-60bdfaee1507-kube-api-access-rmpfc\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.675239 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.675331 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.675386 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.675493 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.675677 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.677164 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6ffe1f41-e154-4eb4-a871-60bdfaee1507" (UID: "6ffe1f41-e154-4eb4-a871-60bdfaee1507"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.704315 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.777196 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.053274 4902 generic.go:334] "Generic (PLEG): container finished" podID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" containerID="1bf0f0d44db0898ad571fc3dc44b938eda2883d14d10f909c94ababfd0ac149c" exitCode=0 Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.053336 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" event={"ID":"95d16264-79d8-49aa-92aa-9f95f6f88ee5","Type":"ContainerDied","Data":"1bf0f0d44db0898ad571fc3dc44b938eda2883d14d10f909c94ababfd0ac149c"} Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.053652 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" event={"ID":"95d16264-79d8-49aa-92aa-9f95f6f88ee5","Type":"ContainerStarted","Data":"3301d34c981a80c2f15194742c15041f8dd34876ffe78731a10b3a3473a032e9"} Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.054827 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569676bc6b-gw28h" event={"ID":"b2e9efdb-8b95-4082-8a1d-8b5a987b2516","Type":"ContainerStarted","Data":"d6d295ac44d5e84b8146c28859766bda166d60fe457d50975c21ca70126c4bc2"} Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.068499 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" event={"ID":"6ffe1f41-e154-4eb4-a871-60bdfaee1507","Type":"ContainerDied","Data":"874a2c6cdd912e6f6172031fa775f0dcec49b3d3d1e70dc96e7ff9e1e1fbe364"} Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.068549 4902 scope.go:117] "RemoveContainer" containerID="d79940242a6d6496842cce005349ce2182772f08d24d0ebbf491cb39873ab862" Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.068676 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.072960 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6445cbf9c4-z4mzt" event={"ID":"b8579d67-5e61-40f2-9725-b695f7d7bb81","Type":"ContainerStarted","Data":"23ea5f83fdd14483bee45b939eae36b3f6796ce08c5e8747aa8cd9acb4874d6c"} Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.298613 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-dfqgq"] Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.315029 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-dfqgq"] Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.081627 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" event={"ID":"95d16264-79d8-49aa-92aa-9f95f6f88ee5","Type":"ContainerStarted","Data":"44d81e25a8e35f144214a1d8f5c1fb7aa5a894a9f7d53ffd55c93106fb15e4c3"} Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.085346 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569676bc6b-gw28h" event={"ID":"b2e9efdb-8b95-4082-8a1d-8b5a987b2516","Type":"ContainerStarted","Data":"a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30"} Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.092109 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6445cbf9c4-z4mzt" event={"ID":"b8579d67-5e61-40f2-9725-b695f7d7bb81","Type":"ContainerStarted","Data":"f1c482b66b7a7732a3644035350a84867090e3f9916167da7455775012f80692"} Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.092287 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.092303 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.128231 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6445cbf9c4-z4mzt" podStartSLOduration=4.12821037 podStartE2EDuration="4.12821037s" podCreationTimestamp="2026-01-21 14:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:44.116701754 +0000 UTC m=+1186.193534783" watchObservedRunningTime="2026-01-21 14:53:44.12821037 +0000 UTC m=+1186.205043399" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.306562 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ffe1f41-e154-4eb4-a871-60bdfaee1507" path="/var/lib/kubelet/pods/6ffe1f41-e154-4eb4-a871-60bdfaee1507/volumes" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.686726 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7887695489-rtxbl"] Jan 21 14:53:44 crc kubenswrapper[4902]: E0121 14:53:44.687113 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ffe1f41-e154-4eb4-a871-60bdfaee1507" containerName="init" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.687126 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ffe1f41-e154-4eb4-a871-60bdfaee1507" containerName="init" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.687337 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ffe1f41-e154-4eb4-a871-60bdfaee1507" containerName="init" Jan 
21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.691147 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.692804 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.693238 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.699561 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7887695489-rtxbl"] Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.815956 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-ovndb-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.816418 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-combined-ca-bundle\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.816444 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-public-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.816463 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-config\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.816780 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-httpd-config\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.817007 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-internal-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.817127 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hptjz\" (UniqueName: \"kubernetes.io/projected/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-kube-api-access-hptjz\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.919197 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-ovndb-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.919274 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-combined-ca-bundle\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.919299 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-public-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.919318 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-config\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.919396 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-httpd-config\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.919482 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-internal-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.919515 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hptjz\" (UniqueName: \"kubernetes.io/projected/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-kube-api-access-hptjz\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.925112 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-internal-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.925206 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-public-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.927977 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-httpd-config\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.928334 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-ovndb-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.935286 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-config\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.936407 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-combined-ca-bundle\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.945437 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hptjz\" (UniqueName: \"kubernetes.io/projected/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-kube-api-access-hptjz\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:45 crc kubenswrapper[4902]: I0121 14:53:45.008091 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:45 crc kubenswrapper[4902]: W0121 14:53:45.783942 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dc3ac42_826c_4f25_a3f7_d1ab2eb8cbf5.slice/crio-b6ec2a7ebbd2aee467c0043661c91112bee51c7e8687af847e64a040bb7767f9 WatchSource:0}: Error finding container b6ec2a7ebbd2aee467c0043661c91112bee51c7e8687af847e64a040bb7767f9: Status 404 returned error can't find the container with id b6ec2a7ebbd2aee467c0043661c91112bee51c7e8687af847e64a040bb7767f9 Jan 21 14:53:45 crc kubenswrapper[4902]: I0121 14:53:45.784811 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7887695489-rtxbl"] Jan 21 14:53:46 crc kubenswrapper[4902]: I0121 14:53:46.115969 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887695489-rtxbl" event={"ID":"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5","Type":"ContainerStarted","Data":"b6ec2a7ebbd2aee467c0043661c91112bee51c7e8687af847e64a040bb7767f9"} Jan 21 14:53:46 crc kubenswrapper[4902]: I0121 14:53:46.119305 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569676bc6b-gw28h" event={"ID":"b2e9efdb-8b95-4082-8a1d-8b5a987b2516","Type":"ContainerStarted","Data":"6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69"} Jan 21 14:53:46 crc kubenswrapper[4902]: I0121 14:53:46.119344 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:46 crc kubenswrapper[4902]: I0121 14:53:46.119366 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:46 crc kubenswrapper[4902]: I0121 14:53:46.136685 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" podStartSLOduration=5.136661921 podStartE2EDuration="5.136661921s" podCreationTimestamp="2026-01-21 14:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:46.133951094 +0000 UTC m=+1188.210784123" watchObservedRunningTime="2026-01-21 14:53:46.136661921 +0000 UTC m=+1188.213494950" Jan 21 14:53:46 crc kubenswrapper[4902]: I0121 14:53:46.167136 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-569676bc6b-gw28h" podStartSLOduration=5.167110913 podStartE2EDuration="5.167110913s" podCreationTimestamp="2026-01-21 14:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:46.159765705 +0000 UTC m=+1188.236598734" watchObservedRunningTime="2026-01-21 14:53:46.167110913 +0000 UTC m=+1188.243943962" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.127262 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887695489-rtxbl" event={"ID":"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5","Type":"ContainerStarted","Data":"51583e6b97e071d7cf96bdf513ff863344bb3712ef59fd993cdce4376b16aa3c"} Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.673971 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5df595696d-2ftxp"] Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.676131 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.678326 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.679125 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.698268 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5df595696d-2ftxp"] Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.772201 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/561efc1e-a930-440f-83b1-a75217a11f32-logs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.772266 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-combined-ca-bundle\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.772320 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.772485 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvkf7\" (UniqueName: \"kubernetes.io/projected/561efc1e-a930-440f-83b1-a75217a11f32-kube-api-access-jvkf7\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.772563 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data-custom\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.772623 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-public-tls-certs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.772709 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-internal-tls-certs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.875215 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/561efc1e-a930-440f-83b1-a75217a11f32-logs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.875288 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-combined-ca-bundle\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.875324 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.875389 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvkf7\" (UniqueName: \"kubernetes.io/projected/561efc1e-a930-440f-83b1-a75217a11f32-kube-api-access-jvkf7\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.875427 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data-custom\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.875463 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-public-tls-certs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.875512 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-internal-tls-certs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.875713 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/561efc1e-a930-440f-83b1-a75217a11f32-logs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.880367 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-internal-tls-certs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.880748 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-combined-ca-bundle\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.884369 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.889717 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-public-tls-certs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.892326 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvkf7\" (UniqueName: \"kubernetes.io/projected/561efc1e-a930-440f-83b1-a75217a11f32-kube-api-access-jvkf7\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.895817 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data-custom\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:48 crc kubenswrapper[4902]: I0121 14:53:47.999074 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:51 crc kubenswrapper[4902]: I0121 14:53:51.415169 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:51 crc kubenswrapper[4902]: I0121 14:53:51.497234 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-xstzn"] Jan 21 14:53:51 crc kubenswrapper[4902]: I0121 14:53:51.497483 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" podUID="fdcec88e-b290-47a2-a111-f353528b337e" containerName="dnsmasq-dns" containerID="cri-o://7692fd62f5f8d970ca1dd253fc5c7512cbe9da4bdb84caf7d56a5669f3d8f303" gracePeriod=10 Jan 21 14:53:52 crc kubenswrapper[4902]: I0121 14:53:52.369573 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6445cbf9c4-z4mzt" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:53:52 crc kubenswrapper[4902]: I0121 14:53:52.372831 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:52 crc kubenswrapper[4902]: I0121 14:53:52.380272 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6445cbf9c4-z4mzt" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:53:53 crc kubenswrapper[4902]: I0121 14:53:53.200799 4902 generic.go:334] "Generic (PLEG): container finished" podID="fdcec88e-b290-47a2-a111-f353528b337e" containerID="7692fd62f5f8d970ca1dd253fc5c7512cbe9da4bdb84caf7d56a5669f3d8f303" exitCode=0 Jan 21 14:53:53 crc kubenswrapper[4902]: I0121 14:53:53.200849 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" event={"ID":"fdcec88e-b290-47a2-a111-f353528b337e","Type":"ContainerDied","Data":"7692fd62f5f8d970ca1dd253fc5c7512cbe9da4bdb84caf7d56a5669f3d8f303"} Jan 21 14:53:53 crc kubenswrapper[4902]: I0121 14:53:53.899165 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.213079 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" event={"ID":"fdcec88e-b290-47a2-a111-f353528b337e","Type":"ContainerDied","Data":"7ebaaeb2e75c14c9e558716e3947916d738a9e8482d276a5a56a2360272de153"} Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.213131 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ebaaeb2e75c14c9e558716e3947916d738a9e8482d276a5a56a2360272de153" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.259467 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.434264 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-sb\") pod \"fdcec88e-b290-47a2-a111-f353528b337e\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.434344 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-swift-storage-0\") pod \"fdcec88e-b290-47a2-a111-f353528b337e\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.434383 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-config\") pod \"fdcec88e-b290-47a2-a111-f353528b337e\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.434438 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-svc\") pod \"fdcec88e-b290-47a2-a111-f353528b337e\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.434544 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6hzz\" (UniqueName: \"kubernetes.io/projected/fdcec88e-b290-47a2-a111-f353528b337e-kube-api-access-d6hzz\") pod \"fdcec88e-b290-47a2-a111-f353528b337e\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.434585 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-nb\") pod \"fdcec88e-b290-47a2-a111-f353528b337e\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.440847 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdcec88e-b290-47a2-a111-f353528b337e-kube-api-access-d6hzz" (OuterVolumeSpecName: "kube-api-access-d6hzz") pod "fdcec88e-b290-47a2-a111-f353528b337e" (UID: "fdcec88e-b290-47a2-a111-f353528b337e"). InnerVolumeSpecName "kube-api-access-d6hzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.480828 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fdcec88e-b290-47a2-a111-f353528b337e" (UID: "fdcec88e-b290-47a2-a111-f353528b337e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.483947 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-config" (OuterVolumeSpecName: "config") pod "fdcec88e-b290-47a2-a111-f353528b337e" (UID: "fdcec88e-b290-47a2-a111-f353528b337e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.488643 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fdcec88e-b290-47a2-a111-f353528b337e" (UID: "fdcec88e-b290-47a2-a111-f353528b337e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.512419 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fdcec88e-b290-47a2-a111-f353528b337e" (UID: "fdcec88e-b290-47a2-a111-f353528b337e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.512573 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fdcec88e-b290-47a2-a111-f353528b337e" (UID: "fdcec88e-b290-47a2-a111-f353528b337e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.537391 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6hzz\" (UniqueName: \"kubernetes.io/projected/fdcec88e-b290-47a2-a111-f353528b337e-kube-api-access-d6hzz\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.537423 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.537432 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.537440 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.537449 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.537458 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.112033 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5df595696d-2ftxp"] Jan 21 14:53:55 crc kubenswrapper[4902]: W0121 14:53:55.118426 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod561efc1e_a930_440f_83b1_a75217a11f32.slice/crio-596188194bb88a2f6c89003cb099ac4ba874000a54cf1ceffb7115b26f061225 WatchSource:0}: Error finding container 
596188194bb88a2f6c89003cb099ac4ba874000a54cf1ceffb7115b26f061225: Status 404 returned error can't find the container with id 596188194bb88a2f6c89003cb099ac4ba874000a54cf1ceffb7115b26f061225 Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.236358 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" event={"ID":"365d6c18-395e-4a62-939d-a04927ffa8aa","Type":"ContainerStarted","Data":"c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8"} Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.236416 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" event={"ID":"365d6c18-395e-4a62-939d-a04927ffa8aa","Type":"ContainerStarted","Data":"f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48"} Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.241226 4902 generic.go:334] "Generic (PLEG): container finished" podID="137b1040-d368-4b6d-a4db-ba7c626f666f" containerID="b544fa374d13ef6e784a8d5d16f0cdb36de690b191b7cd286db841a786a83df0" exitCode=0 Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.241258 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-twg7k" event={"ID":"137b1040-d368-4b6d-a4db-ba7c626f666f","Type":"ContainerDied","Data":"b544fa374d13ef6e784a8d5d16f0cdb36de690b191b7cd286db841a786a83df0"} Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.243498 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerStarted","Data":"83d78b65070bf1a4a82cd01725c6ee07441aaf87ba91a13a89576279ece3cbbe"} Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.243593 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="ceilometer-central-agent" containerID="cri-o://51f743b434cb645595957492e22a74e7b49af55c8dc22934431bf87a6f8fd041" gracePeriod=30 Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.243657 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="sg-core" containerID="cri-o://81c864035291d1070b465dc09d84ebe0421ec1a192cefda871ae7e174cd7d5cb" gracePeriod=30 Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.243717 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="proxy-httpd" containerID="cri-o://83d78b65070bf1a4a82cd01725c6ee07441aaf87ba91a13a89576279ece3cbbe" gracePeriod=30 Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.243729 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="ceilometer-notification-agent" containerID="cri-o://ae5941c9751c3e64a04d8b1b8d712d2bab081ccec45d54e0c30b1caa64b2df4d" gracePeriod=30 Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.243621 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.254823 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df595696d-2ftxp" 
event={"ID":"561efc1e-a930-440f-83b1-a75217a11f32","Type":"ContainerStarted","Data":"596188194bb88a2f6c89003cb099ac4ba874000a54cf1ceffb7115b26f061225"} Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.255546 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" podStartSLOduration=2.240614697 podStartE2EDuration="15.255537307s" podCreationTimestamp="2026-01-21 14:53:40 +0000 UTC" firstStartedPulling="2026-01-21 14:53:41.731964982 +0000 UTC m=+1183.808798011" lastFinishedPulling="2026-01-21 14:53:54.746887582 +0000 UTC m=+1196.823720621" observedRunningTime="2026-01-21 14:53:55.254076336 +0000 UTC m=+1197.330909365" watchObservedRunningTime="2026-01-21 14:53:55.255537307 +0000 UTC m=+1197.332370336" Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.258022 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887695489-rtxbl" event={"ID":"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5","Type":"ContainerStarted","Data":"2c30f8fcf44519868021b999009e6e0a364f65ba9bb5e12d8b816868d45e7ed6"} Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.261875 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.262150 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68564cb5c-bh98h" event={"ID":"c653ffa0-195e-4eda-8c25-cfcff2715bdf","Type":"ContainerStarted","Data":"5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0"} Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.262204 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68564cb5c-bh98h" event={"ID":"c653ffa0-195e-4eda-8c25-cfcff2715bdf","Type":"ContainerStarted","Data":"43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb"} Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.292477 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.69273282 podStartE2EDuration="1m3.292460652s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="2026-01-21 14:52:54.199754258 +0000 UTC m=+1136.276587287" lastFinishedPulling="2026-01-21 14:53:54.79948209 +0000 UTC m=+1196.876315119" observedRunningTime="2026-01-21 14:53:55.28708765 +0000 UTC m=+1197.363920679" watchObservedRunningTime="2026-01-21 14:53:55.292460652 +0000 UTC m=+1197.369293671" Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.319671 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7887695489-rtxbl" podStartSLOduration=11.319651722 podStartE2EDuration="11.319651722s" podCreationTimestamp="2026-01-21 14:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:55.309157205 +0000 UTC m=+1197.385990234" watchObservedRunningTime="2026-01-21 14:53:55.319651722 +0000 UTC m=+1197.396484751" Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.349881 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-68564cb5c-bh98h" podStartSLOduration=2.821576359 podStartE2EDuration="16.349845867s" podCreationTimestamp="2026-01-21 14:53:39 +0000 UTC" firstStartedPulling="2026-01-21 14:53:41.211232325 +0000 UTC m=+1183.288065354" lastFinishedPulling="2026-01-21 14:53:54.739501833 +0000 UTC 
m=+1196.816334862" observedRunningTime="2026-01-21 14:53:55.33159835 +0000 UTC m=+1197.408431379" watchObservedRunningTime="2026-01-21 14:53:55.349845867 +0000 UTC m=+1197.426678896" Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.360861 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-xstzn"] Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.368510 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-xstzn"] Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.272929 4902 generic.go:334] "Generic (PLEG): container finished" podID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerID="83d78b65070bf1a4a82cd01725c6ee07441aaf87ba91a13a89576279ece3cbbe" exitCode=0 Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.273311 4902 generic.go:334] "Generic (PLEG): container finished" podID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerID="81c864035291d1070b465dc09d84ebe0421ec1a192cefda871ae7e174cd7d5cb" exitCode=2 Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.273329 4902 generic.go:334] "Generic (PLEG): container finished" podID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerID="51f743b434cb645595957492e22a74e7b49af55c8dc22934431bf87a6f8fd041" exitCode=0 Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.272991 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerDied","Data":"83d78b65070bf1a4a82cd01725c6ee07441aaf87ba91a13a89576279ece3cbbe"} Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.273428 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerDied","Data":"81c864035291d1070b465dc09d84ebe0421ec1a192cefda871ae7e174cd7d5cb"} Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.273441 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerDied","Data":"51f743b434cb645595957492e22a74e7b49af55c8dc22934431bf87a6f8fd041"} Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.276862 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df595696d-2ftxp" event={"ID":"561efc1e-a930-440f-83b1-a75217a11f32","Type":"ContainerStarted","Data":"709dea640199a3e29bbff0c5bd046ca78f3c55c233e1043ae28cc59e518b7cd2"} Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.277245 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df595696d-2ftxp" event={"ID":"561efc1e-a930-440f-83b1-a75217a11f32","Type":"ContainerStarted","Data":"b91bda9e24415f053bbf7e3136ae0eb36d0535911dff5c3a69ee2c9fd40feb34"} Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.277288 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.277312 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.277739 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.312794 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdcec88e-b290-47a2-a111-f353528b337e" 
path="/var/lib/kubelet/pods/fdcec88e-b290-47a2-a111-f353528b337e/volumes" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.313194 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5df595696d-2ftxp" podStartSLOduration=9.313169489 podStartE2EDuration="9.313169489s" podCreationTimestamp="2026-01-21 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:56.303586388 +0000 UTC m=+1198.380419417" watchObservedRunningTime="2026-01-21 14:53:56.313169489 +0000 UTC m=+1198.390002518" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.687627 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-twg7k" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.886705 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-config-data\") pod \"137b1040-d368-4b6d-a4db-ba7c626f666f\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.886765 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4xph\" (UniqueName: \"kubernetes.io/projected/137b1040-d368-4b6d-a4db-ba7c626f666f-kube-api-access-x4xph\") pod \"137b1040-d368-4b6d-a4db-ba7c626f666f\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.886805 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-combined-ca-bundle\") pod \"137b1040-d368-4b6d-a4db-ba7c626f666f\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.886839 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/137b1040-d368-4b6d-a4db-ba7c626f666f-etc-machine-id\") pod \"137b1040-d368-4b6d-a4db-ba7c626f666f\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.886859 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-db-sync-config-data\") pod \"137b1040-d368-4b6d-a4db-ba7c626f666f\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.887034 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/137b1040-d368-4b6d-a4db-ba7c626f666f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "137b1040-d368-4b6d-a4db-ba7c626f666f" (UID: "137b1040-d368-4b6d-a4db-ba7c626f666f"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.887085 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-scripts\") pod \"137b1040-d368-4b6d-a4db-ba7c626f666f\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.887898 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/137b1040-d368-4b6d-a4db-ba7c626f666f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.900190 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "137b1040-d368-4b6d-a4db-ba7c626f666f" (UID: "137b1040-d368-4b6d-a4db-ba7c626f666f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.901772 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-scripts" (OuterVolumeSpecName: "scripts") pod "137b1040-d368-4b6d-a4db-ba7c626f666f" (UID: "137b1040-d368-4b6d-a4db-ba7c626f666f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.904493 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/137b1040-d368-4b6d-a4db-ba7c626f666f-kube-api-access-x4xph" (OuterVolumeSpecName: "kube-api-access-x4xph") pod "137b1040-d368-4b6d-a4db-ba7c626f666f" (UID: "137b1040-d368-4b6d-a4db-ba7c626f666f"). InnerVolumeSpecName "kube-api-access-x4xph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.912861 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "137b1040-d368-4b6d-a4db-ba7c626f666f" (UID: "137b1040-d368-4b6d-a4db-ba7c626f666f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.938482 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-config-data" (OuterVolumeSpecName: "config-data") pod "137b1040-d368-4b6d-a4db-ba7c626f666f" (UID: "137b1040-d368-4b6d-a4db-ba7c626f666f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.989790 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.989820 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.989830 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4xph\" (UniqueName: \"kubernetes.io/projected/137b1040-d368-4b6d-a4db-ba7c626f666f-kube-api-access-x4xph\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.989839 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.989848 4902 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.291564 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-twg7k" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.294094 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-twg7k" event={"ID":"137b1040-d368-4b6d-a4db-ba7c626f666f","Type":"ContainerDied","Data":"83424afb06205a5855d6b3c92c92324b00e4ab6828b9f7a1bf1115dc87d2cda6"} Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.294138 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83424afb06205a5855d6b3c92c92324b00e4ab6828b9f7a1bf1115dc87d2cda6" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.671571 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:53:57 crc kubenswrapper[4902]: E0121 14:53:57.671934 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdcec88e-b290-47a2-a111-f353528b337e" containerName="init" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.671951 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdcec88e-b290-47a2-a111-f353528b337e" containerName="init" Jan 21 14:53:57 crc kubenswrapper[4902]: E0121 14:53:57.671971 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137b1040-d368-4b6d-a4db-ba7c626f666f" containerName="cinder-db-sync" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.671978 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="137b1040-d368-4b6d-a4db-ba7c626f666f" containerName="cinder-db-sync" Jan 21 14:53:57 crc kubenswrapper[4902]: E0121 14:53:57.671991 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdcec88e-b290-47a2-a111-f353528b337e" containerName="dnsmasq-dns" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.671997 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdcec88e-b290-47a2-a111-f353528b337e" containerName="dnsmasq-dns" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.672232 4902 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="137b1040-d368-4b6d-a4db-ba7c626f666f" containerName="cinder-db-sync" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.672256 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdcec88e-b290-47a2-a111-f353528b337e" containerName="dnsmasq-dns" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.673331 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.680674 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.680815 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.681132 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.682335 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wh7dk" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.737992 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.745103 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-94qng"] Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.746689 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.789289 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-94qng"] Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813452 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hssmx\" (UniqueName: \"kubernetes.io/projected/9c09608f-53ce-4d79-85d0-75bf0e552380-kube-api-access-hssmx\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813506 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813534 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-config\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813588 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813693 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813750 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813787 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813856 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813886 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813908 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813928 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-scripts\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813946 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msnnk\" (UniqueName: \"kubernetes.io/projected/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-kube-api-access-msnnk\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915368 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915443 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915501 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915554 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915627 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915656 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915679 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915736 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-scripts\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915759 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msnnk\" (UniqueName: \"kubernetes.io/projected/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-kube-api-access-msnnk\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915805 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hssmx\" (UniqueName: \"kubernetes.io/projected/9c09608f-53ce-4d79-85d0-75bf0e552380-kube-api-access-hssmx\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915834 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915857 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-config\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.916882 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-config\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.917313 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.917866 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.918174 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.918191 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.918470 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.921964 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.922783 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-scripts\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.923388 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.932967 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.933921 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hssmx\" (UniqueName: \"kubernetes.io/projected/9c09608f-53ce-4d79-85d0-75bf0e552380-kube-api-access-hssmx\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.937433 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msnnk\" (UniqueName: \"kubernetes.io/projected/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-kube-api-access-msnnk\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.960413 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.966287 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.973314 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.983676 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.995212 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.016573 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.016624 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644ddd93-5cca-4483-b62d-548f6a863d72-logs\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.016663 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/644ddd93-5cca-4483-b62d-548f6a863d72-etc-machine-id\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.016682 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-scripts\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.016808 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data-custom\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.016843 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.016879 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrr4g\" (UniqueName: \"kubernetes.io/projected/644ddd93-5cca-4483-b62d-548f6a863d72-kube-api-access-hrr4g\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.079849 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.118496 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data-custom\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.118563 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.118639 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrr4g\" (UniqueName: \"kubernetes.io/projected/644ddd93-5cca-4483-b62d-548f6a863d72-kube-api-access-hrr4g\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.118689 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.118719 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644ddd93-5cca-4483-b62d-548f6a863d72-logs\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.118786 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/644ddd93-5cca-4483-b62d-548f6a863d72-etc-machine-id\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.118814 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-scripts\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.119550 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/644ddd93-5cca-4483-b62d-548f6a863d72-etc-machine-id\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.119973 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644ddd93-5cca-4483-b62d-548f6a863d72-logs\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.124023 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " 
pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.128430 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-scripts\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.130124 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.131079 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data-custom\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.142196 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrr4g\" (UniqueName: \"kubernetes.io/projected/644ddd93-5cca-4483-b62d-548f6a863d72-kube-api-access-hrr4g\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.377972 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" podUID="fdcec88e-b290-47a2-a111-f353528b337e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: i/o timeout" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.430224 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.529387 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:53:58 crc kubenswrapper[4902]: W0121 14:53:58.533634 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca715fd9_410d_4675_bbc0_3cfc6a3e2b14.slice/crio-12b1ce4108345896bb01a54435bff262c712e46700037435ba35ac6fcf91daeb WatchSource:0}: Error finding container 12b1ce4108345896bb01a54435bff262c712e46700037435ba35ac6fcf91daeb: Status 404 returned error can't find the container with id 12b1ce4108345896bb01a54435bff262c712e46700037435ba35ac6fcf91daeb Jan 21 14:53:58 crc kubenswrapper[4902]: W0121 14:53:58.693174 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c09608f_53ce_4d79_85d0_75bf0e552380.slice/crio-d5afd22000b4d3f3c6a7c4d47e16d67d68cf4b35e698216b1393d6178399d3b9 WatchSource:0}: Error finding container d5afd22000b4d3f3c6a7c4d47e16d67d68cf4b35e698216b1393d6178399d3b9: Status 404 returned error can't find the container with id d5afd22000b4d3f3c6a7c4d47e16d67d68cf4b35e698216b1393d6178399d3b9 Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.721333 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-94qng"] Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.926147 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:53:58 crc kubenswrapper[4902]: W0121 14:53:58.927485 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod644ddd93_5cca_4483_b62d_548f6a863d72.slice/crio-cc0cb71f6492ec85d79117f468c95c7c85fc7030893dc36c8264cffcce5d3dad WatchSource:0}: Error finding container cc0cb71f6492ec85d79117f468c95c7c85fc7030893dc36c8264cffcce5d3dad: Status 404 returned error can't find the container with id cc0cb71f6492ec85d79117f468c95c7c85fc7030893dc36c8264cffcce5d3dad Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.328572 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"644ddd93-5cca-4483-b62d-548f6a863d72","Type":"ContainerStarted","Data":"cc0cb71f6492ec85d79117f468c95c7c85fc7030893dc36c8264cffcce5d3dad"} Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.330755 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14","Type":"ContainerStarted","Data":"12b1ce4108345896bb01a54435bff262c712e46700037435ba35ac6fcf91daeb"} Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.334195 4902 generic.go:334] "Generic (PLEG): container finished" podID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerID="ae5941c9751c3e64a04d8b1b8d712d2bab081ccec45d54e0c30b1caa64b2df4d" exitCode=0 Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.334270 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerDied","Data":"ae5941c9751c3e64a04d8b1b8d712d2bab081ccec45d54e0c30b1caa64b2df4d"} Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.336681 4902 generic.go:334] "Generic (PLEG): container finished" podID="9c09608f-53ce-4d79-85d0-75bf0e552380" containerID="054019a0d14354ed0c0e875d417095f6b26794e582d8869760a6468e64837519" exitCode=0 
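
[Editor's note, not part of the captured log: the "SyncLoop (PLEG): event for pod" entries above and below are kubelet's Pod Lifecycle Event Generator reporting container transitions; each carries the pod name, the pod UID, an event type such as ContainerStarted or ContainerDied, and a container or sandbox ID in the Data field, while the companion "Generic (PLEG): container finished" entries carry the exit code. The following is a minimal reader-side sketch for pulling those transitions out of a log like this one. The program, its regular expression, and the kubelet.log input path are illustrative assumptions for log analysis only; none of it is kubelet code or API.]

// plegscan.go - hypothetical helper (not part of kubelet) that extracts
// PLEG lifecycle events from an uncompressed kubelet log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches entries of the form seen in this log:
//   ... "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0"
//   event={"ID":"644ddd93-...","Type":"ContainerStarted","Data":"cc0cb71f..."}
var plegRe = regexp.MustCompile(
	`"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"([^"]+)","Type":"([^"]+)","Data":"([^"]+)"\}`)

func main() {
	f, err := os.Open("kubelet.log") // assumed input path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet log lines can be very long
	for sc.Scan() {
		if m := plegRe.FindStringSubmatch(sc.Text()); m != nil {
			// m[1]=pod, m[2]=pod UID, m[3]=event type, m[4]=container or sandbox ID
			fmt.Printf("%-45s %-18s %s\n", m[1], m[3], m[4])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

[Run against this log, the sketch would emit one line per lifecycle transition, e.g. openstack/cinder-api-0 ContainerDied followed by the sandbox teardown visible in the entries that follow.]
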
Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.336721 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" event={"ID":"9c09608f-53ce-4d79-85d0-75bf0e552380","Type":"ContainerDied","Data":"054019a0d14354ed0c0e875d417095f6b26794e582d8869760a6468e64837519"} Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.336746 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" event={"ID":"9c09608f-53ce-4d79-85d0-75bf0e552380","Type":"ContainerStarted","Data":"d5afd22000b4d3f3c6a7c4d47e16d67d68cf4b35e698216b1393d6178399d3b9"} Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.684765 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.747668 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-combined-ca-bundle\") pod \"d8d84757-ad27-4177-be9f-d7d351e771e2\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.747724 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-sg-core-conf-yaml\") pod \"d8d84757-ad27-4177-be9f-d7d351e771e2\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.747756 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg6k2\" (UniqueName: \"kubernetes.io/projected/d8d84757-ad27-4177-be9f-d7d351e771e2-kube-api-access-bg6k2\") pod \"d8d84757-ad27-4177-be9f-d7d351e771e2\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.747783 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-run-httpd\") pod \"d8d84757-ad27-4177-be9f-d7d351e771e2\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.747878 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-log-httpd\") pod \"d8d84757-ad27-4177-be9f-d7d351e771e2\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.747942 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-scripts\") pod \"d8d84757-ad27-4177-be9f-d7d351e771e2\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.747987 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-config-data\") pod \"d8d84757-ad27-4177-be9f-d7d351e771e2\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.759501 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d8d84757-ad27-4177-be9f-d7d351e771e2" (UID: 
"d8d84757-ad27-4177-be9f-d7d351e771e2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.759718 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d8d84757-ad27-4177-be9f-d7d351e771e2" (UID: "d8d84757-ad27-4177-be9f-d7d351e771e2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.780245 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-scripts" (OuterVolumeSpecName: "scripts") pod "d8d84757-ad27-4177-be9f-d7d351e771e2" (UID: "d8d84757-ad27-4177-be9f-d7d351e771e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.782502 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d84757-ad27-4177-be9f-d7d351e771e2-kube-api-access-bg6k2" (OuterVolumeSpecName: "kube-api-access-bg6k2") pod "d8d84757-ad27-4177-be9f-d7d351e771e2" (UID: "d8d84757-ad27-4177-be9f-d7d351e771e2"). InnerVolumeSpecName "kube-api-access-bg6k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.849702 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.849736 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg6k2\" (UniqueName: \"kubernetes.io/projected/d8d84757-ad27-4177-be9f-d7d351e771e2-kube-api-access-bg6k2\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.849749 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.849759 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.883156 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.895549 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d8d84757-ad27-4177-be9f-d7d351e771e2" (UID: "d8d84757-ad27-4177-be9f-d7d351e771e2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.944618 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8d84757-ad27-4177-be9f-d7d351e771e2" (UID: "d8d84757-ad27-4177-be9f-d7d351e771e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.956904 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.956947 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.018706 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-config-data" (OuterVolumeSpecName: "config-data") pod "d8d84757-ad27-4177-be9f-d7d351e771e2" (UID: "d8d84757-ad27-4177-be9f-d7d351e771e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.058321 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.363847 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"644ddd93-5cca-4483-b62d-548f6a863d72","Type":"ContainerStarted","Data":"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4"} Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.386567 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerDied","Data":"93b4216849acc7e83ad93b11dfedacb592b887f2e39ca3b5b2c28470072e2c3e"} Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.386615 4902 scope.go:117] "RemoveContainer" containerID="83d78b65070bf1a4a82cd01725c6ee07441aaf87ba91a13a89576279ece3cbbe" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.386831 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.391671 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" event={"ID":"9c09608f-53ce-4d79-85d0-75bf0e552380","Type":"ContainerStarted","Data":"58e43d8b58cb6c7891b30fdbcfbbaedb613b0110edddfc00c5eeec2b0d50db94"} Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.391987 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.435144 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" podStartSLOduration=3.435127696 podStartE2EDuration="3.435127696s" podCreationTimestamp="2026-01-21 14:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:00.416306463 +0000 UTC m=+1202.493139492" watchObservedRunningTime="2026-01-21 14:54:00.435127696 +0000 UTC m=+1202.511960715" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.446088 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.457751 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.472373 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:00 crc kubenswrapper[4902]: E0121 14:54:00.472758 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="sg-core" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.472771 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="sg-core" Jan 21 14:54:00 crc kubenswrapper[4902]: E0121 14:54:00.472786 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="proxy-httpd" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.472792 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="proxy-httpd" Jan 21 14:54:00 crc kubenswrapper[4902]: E0121 14:54:00.472812 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="ceilometer-notification-agent" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.472818 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="ceilometer-notification-agent" Jan 21 14:54:00 crc kubenswrapper[4902]: E0121 14:54:00.472836 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="ceilometer-central-agent" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.472842 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="ceilometer-central-agent" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.473009 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="ceilometer-central-agent" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.473021 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" 
containerName="ceilometer-notification-agent" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.473031 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="sg-core" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.473065 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="proxy-httpd" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.474666 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.480032 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.480309 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.484482 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-scripts\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.484525 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.484581 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.484684 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-run-httpd\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.484711 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-config-data\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.484839 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-log-httpd\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.484877 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gszw2\" (UniqueName: \"kubernetes.io/projected/e729f055-6d31-4994-8561-fbefd5aba351-kube-api-access-gszw2\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 
14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.487700 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.541140 4902 scope.go:117] "RemoveContainer" containerID="81c864035291d1070b465dc09d84ebe0421ec1a192cefda871ae7e174cd7d5cb" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.585630 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-run-httpd\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.585725 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-config-data\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.586163 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-run-httpd\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.586608 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-log-httpd\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.586669 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gszw2\" (UniqueName: \"kubernetes.io/projected/e729f055-6d31-4994-8561-fbefd5aba351-kube-api-access-gszw2\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.586734 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-scripts\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.586791 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.586844 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.586874 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-log-httpd\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.590073 4902 scope.go:117] 
"RemoveContainer" containerID="ae5941c9751c3e64a04d8b1b8d712d2bab081ccec45d54e0c30b1caa64b2df4d" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.595685 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.596269 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-scripts\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.596896 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.597004 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-config-data\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.603864 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gszw2\" (UniqueName: \"kubernetes.io/projected/e729f055-6d31-4994-8561-fbefd5aba351-kube-api-access-gszw2\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.667706 4902 scope.go:117] "RemoveContainer" containerID="51f743b434cb645595957492e22a74e7b49af55c8dc22934431bf87a6f8fd041" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.861309 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.401866 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"644ddd93-5cca-4483-b62d-548f6a863d72","Type":"ContainerStarted","Data":"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7"} Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.402447 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.402114 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" containerName="cinder-api" containerID="cri-o://4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7" gracePeriod=30 Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.402008 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" containerName="cinder-api-log" containerID="cri-o://d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4" gracePeriod=30 Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.405257 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14","Type":"ContainerStarted","Data":"174cb307680316a6f5706c69b676f7998050e192675c67de1f05b839419fa871"} Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.405499 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14","Type":"ContainerStarted","Data":"fde7aac6a44c88fcbc975cf724be75aaa4036c1671f2b89f5e2108d9cd43b508"} Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.428490 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.428462169 podStartE2EDuration="4.428462169s" podCreationTimestamp="2026-01-21 14:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:01.423653483 +0000 UTC m=+1203.500486512" watchObservedRunningTime="2026-01-21 14:54:01.428462169 +0000 UTC m=+1203.505295198" Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.449083 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.38857455 podStartE2EDuration="4.449065832s" podCreationTimestamp="2026-01-21 14:53:57 +0000 UTC" firstStartedPulling="2026-01-21 14:53:58.536462612 +0000 UTC m=+1200.613295641" lastFinishedPulling="2026-01-21 14:53:59.596953894 +0000 UTC m=+1201.673786923" observedRunningTime="2026-01-21 14:54:01.447283582 +0000 UTC m=+1203.524116611" watchObservedRunningTime="2026-01-21 14:54:01.449065832 +0000 UTC m=+1203.525898861" Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.492426 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.047883 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234085 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data-custom\") pod \"644ddd93-5cca-4483-b62d-548f6a863d72\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234157 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-combined-ca-bundle\") pod \"644ddd93-5cca-4483-b62d-548f6a863d72\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234217 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrr4g\" (UniqueName: \"kubernetes.io/projected/644ddd93-5cca-4483-b62d-548f6a863d72-kube-api-access-hrr4g\") pod \"644ddd93-5cca-4483-b62d-548f6a863d72\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234256 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/644ddd93-5cca-4483-b62d-548f6a863d72-etc-machine-id\") pod \"644ddd93-5cca-4483-b62d-548f6a863d72\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234333 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644ddd93-5cca-4483-b62d-548f6a863d72-logs\") pod \"644ddd93-5cca-4483-b62d-548f6a863d72\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234374 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data\") pod \"644ddd93-5cca-4483-b62d-548f6a863d72\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234408 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-scripts\") pod \"644ddd93-5cca-4483-b62d-548f6a863d72\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234419 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/644ddd93-5cca-4483-b62d-548f6a863d72-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "644ddd93-5cca-4483-b62d-548f6a863d72" (UID: "644ddd93-5cca-4483-b62d-548f6a863d72"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234816 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/644ddd93-5cca-4483-b62d-548f6a863d72-logs" (OuterVolumeSpecName: "logs") pod "644ddd93-5cca-4483-b62d-548f6a863d72" (UID: "644ddd93-5cca-4483-b62d-548f6a863d72"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.235080 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/644ddd93-5cca-4483-b62d-548f6a863d72-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.235104 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644ddd93-5cca-4483-b62d-548f6a863d72-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.239696 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-scripts" (OuterVolumeSpecName: "scripts") pod "644ddd93-5cca-4483-b62d-548f6a863d72" (UID: "644ddd93-5cca-4483-b62d-548f6a863d72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.239705 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644ddd93-5cca-4483-b62d-548f6a863d72-kube-api-access-hrr4g" (OuterVolumeSpecName: "kube-api-access-hrr4g") pod "644ddd93-5cca-4483-b62d-548f6a863d72" (UID: "644ddd93-5cca-4483-b62d-548f6a863d72"). InnerVolumeSpecName "kube-api-access-hrr4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.239986 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "644ddd93-5cca-4483-b62d-548f6a863d72" (UID: "644ddd93-5cca-4483-b62d-548f6a863d72"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.264378 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "644ddd93-5cca-4483-b62d-548f6a863d72" (UID: "644ddd93-5cca-4483-b62d-548f6a863d72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.287243 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data" (OuterVolumeSpecName: "config-data") pod "644ddd93-5cca-4483-b62d-548f6a863d72" (UID: "644ddd93-5cca-4483-b62d-548f6a863d72"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.305501 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" path="/var/lib/kubelet/pods/d8d84757-ad27-4177-be9f-d7d351e771e2/volumes" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.336706 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.336745 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.336758 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.336772 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.336785 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrr4g\" (UniqueName: \"kubernetes.io/projected/644ddd93-5cca-4483-b62d-548f6a863d72-kube-api-access-hrr4g\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.421115 4902 generic.go:334] "Generic (PLEG): container finished" podID="644ddd93-5cca-4483-b62d-548f6a863d72" containerID="4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7" exitCode=0 Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.421149 4902 generic.go:334] "Generic (PLEG): container finished" podID="644ddd93-5cca-4483-b62d-548f6a863d72" containerID="d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4" exitCode=143 Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.421268 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.421537 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"644ddd93-5cca-4483-b62d-548f6a863d72","Type":"ContainerDied","Data":"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7"} Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.421598 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"644ddd93-5cca-4483-b62d-548f6a863d72","Type":"ContainerDied","Data":"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4"} Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.421617 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"644ddd93-5cca-4483-b62d-548f6a863d72","Type":"ContainerDied","Data":"cc0cb71f6492ec85d79117f468c95c7c85fc7030893dc36c8264cffcce5d3dad"} Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.421626 4902 scope.go:117] "RemoveContainer" containerID="4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.424293 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerStarted","Data":"e69baea6eee432132f0068671e39127960be033916e49020781e5d192b5eaecc"} Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.460374 4902 scope.go:117] "RemoveContainer" containerID="d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.470798 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.479484 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.486828 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:54:02 crc kubenswrapper[4902]: E0121 14:54:02.487356 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" containerName="cinder-api-log" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.487377 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" containerName="cinder-api-log" Jan 21 14:54:02 crc kubenswrapper[4902]: E0121 14:54:02.487402 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" containerName="cinder-api" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.487412 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" containerName="cinder-api" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.492075 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" containerName="cinder-api-log" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.492113 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" containerName="cinder-api" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.493978 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.500123 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.500372 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.500552 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.506295 4902 scope.go:117] "RemoveContainer" containerID="4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7" Jan 21 14:54:02 crc kubenswrapper[4902]: E0121 14:54:02.508667 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7\": container with ID starting with 4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7 not found: ID does not exist" containerID="4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.508732 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7"} err="failed to get container status \"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7\": rpc error: code = NotFound desc = could not find container \"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7\": container with ID starting with 4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7 not found: ID does not exist" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.508759 4902 scope.go:117] "RemoveContainer" containerID="d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4" Jan 21 14:54:02 crc kubenswrapper[4902]: E0121 14:54:02.509264 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4\": container with ID starting with d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4 not found: ID does not exist" containerID="d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.509311 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4"} err="failed to get container status \"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4\": rpc error: code = NotFound desc = could not find container \"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4\": container with ID starting with d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4 not found: ID does not exist" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.509345 4902 scope.go:117] "RemoveContainer" containerID="4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.509794 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7"} err="failed to get container status \"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7\": rpc 
error: code = NotFound desc = could not find container \"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7\": container with ID starting with 4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7 not found: ID does not exist" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.509815 4902 scope.go:117] "RemoveContainer" containerID="d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.510163 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4"} err="failed to get container status \"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4\": rpc error: code = NotFound desc = could not find container \"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4\": container with ID starting with d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4 not found: ID does not exist" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.520538 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.652796 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.653072 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.653149 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.653208 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-scripts\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.653228 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nm9x\" (UniqueName: \"kubernetes.io/projected/db4d047b-49f4-4b55-a053-081f1be632b7-kube-api-access-6nm9x\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.653245 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data-custom\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.653303 4902 reconciler_common.go:245] 
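In the RemoveContainer exchange above, the container is already gone by the time the kubelet asks the runtime for its status, so CRI-O answers NotFound; the kubelet logs the error and proceeds, treating "already deleted" as success. A small sketch of that idempotent-delete pattern, using a hypothetical in-memory runtime rather than the real CRI client:

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("NotFound: ID does not exist")

// removeContainer deletes a container but treats "already gone" as success,
// so repeated cleanup passes (as seen in the log) converge instead of failing.
func removeContainer(runtime map[string]bool, id string) error {
	if !runtime[id] {
		fmt.Printf("DeleteContainer returned error for %s: %v (treated as success)\n", id, errNotFound)
		return nil // idempotent: the desired state, container gone, already holds
	}
	delete(runtime, id)
	fmt.Printf("removed container %s\n", id)
	return nil
}

func main() {
	runtime := map[string]bool{"4f16264a": true}
	removeContainer(runtime, "4f16264a") // actually removes it
	removeContainer(runtime, "4f16264a") // NotFound, still succeeds
}

This is why the log shows "DeleteContainer returned error" at info level rather than as a failure: the error is recorded but does not block pod cleanup.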
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.653332 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db4d047b-49f4-4b55-a053-081f1be632b7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.653385 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4d047b-49f4-4b55-a053-081f1be632b7-logs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.755402 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db4d047b-49f4-4b55-a053-081f1be632b7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.755486 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4d047b-49f4-4b55-a053-081f1be632b7-logs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.755570 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.755608 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.755675 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.755745 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db4d047b-49f4-4b55-a053-081f1be632b7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.756002 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4d047b-49f4-4b55-a053-081f1be632b7-logs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.756455 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-scripts\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.756491 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nm9x\" (UniqueName: \"kubernetes.io/projected/db4d047b-49f4-4b55-a053-081f1be632b7-kube-api-access-6nm9x\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.756512 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data-custom\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.756543 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.760375 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.760425 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.761113 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-scripts\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.768272 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.774745 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.776742 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data-custom\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.781196 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6nm9x\" (UniqueName: \"kubernetes.io/projected/db4d047b-49f4-4b55-a053-081f1be632b7-kube-api-access-6nm9x\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.823146 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.995848 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 14:54:03 crc kubenswrapper[4902]: I0121 14:54:03.285706 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:54:03 crc kubenswrapper[4902]: I0121 14:54:03.435727 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"db4d047b-49f4-4b55-a053-081f1be632b7","Type":"ContainerStarted","Data":"cf192cd4c08d4018b743f3dc19c0686fe97811bb3b64651346fb935eec9339db"} Jan 21 14:54:03 crc kubenswrapper[4902]: I0121 14:54:03.437548 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerStarted","Data":"cb93f6564b507552595aee7a7e6446eef24375fdbc49027675bd228983bf00c6"} Jan 21 14:54:03 crc kubenswrapper[4902]: I0121 14:54:03.437589 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerStarted","Data":"4d2c354c316e2909a7d5339fb8d71148e96c6d6c6e258b111839778343d895fd"} Jan 21 14:54:04 crc kubenswrapper[4902]: I0121 14:54:04.321488 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" path="/var/lib/kubelet/pods/644ddd93-5cca-4483-b62d-548f6a863d72/volumes" Jan 21 14:54:04 crc kubenswrapper[4902]: I0121 14:54:04.451847 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerStarted","Data":"80d76f992bb8bfbe9c73393b52cc22c46d041fdcc2adb3e649bc943a55253ef8"} Jan 21 14:54:04 crc kubenswrapper[4902]: I0121 14:54:04.453602 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"db4d047b-49f4-4b55-a053-081f1be632b7","Type":"ContainerStarted","Data":"d81b469d4bfe4317399c28b768091ee1e4d32b1ffeb38b5ab40fde67bdde4b7f"} Jan 21 14:54:04 crc kubenswrapper[4902]: I0121 14:54:04.602311 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:54:04 crc kubenswrapper[4902]: I0121 14:54:04.816766 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:54:04 crc kubenswrapper[4902]: I0121 14:54:04.881817 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6445cbf9c4-z4mzt"] Jan 21 14:54:04 crc kubenswrapper[4902]: I0121 14:54:04.882185 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6445cbf9c4-z4mzt" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api" containerID="cri-o://f1c482b66b7a7732a3644035350a84867090e3f9916167da7455775012f80692" gracePeriod=30 Jan 21 14:54:04 crc kubenswrapper[4902]: I0121 14:54:04.882369 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6445cbf9c4-z4mzt" 
podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api-log" containerID="cri-o://23ea5f83fdd14483bee45b939eae36b3f6796ce08c5e8747aa8cd9acb4874d6c" gracePeriod=30 Jan 21 14:54:05 crc kubenswrapper[4902]: I0121 14:54:05.582329 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"db4d047b-49f4-4b55-a053-081f1be632b7","Type":"ContainerStarted","Data":"4c05b52bed8146e4b813b72bd57efca7be3d0268ea82de7f8102940d78d0f674"} Jan 21 14:54:05 crc kubenswrapper[4902]: I0121 14:54:05.583623 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 14:54:05 crc kubenswrapper[4902]: I0121 14:54:05.589869 4902 generic.go:334] "Generic (PLEG): container finished" podID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerID="23ea5f83fdd14483bee45b939eae36b3f6796ce08c5e8747aa8cd9acb4874d6c" exitCode=143 Jan 21 14:54:05 crc kubenswrapper[4902]: I0121 14:54:05.590793 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6445cbf9c4-z4mzt" event={"ID":"b8579d67-5e61-40f2-9725-b695f7d7bb81","Type":"ContainerDied","Data":"23ea5f83fdd14483bee45b939eae36b3f6796ce08c5e8747aa8cd9acb4874d6c"} Jan 21 14:54:05 crc kubenswrapper[4902]: I0121 14:54:05.606177 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.606162934 podStartE2EDuration="3.606162934s" podCreationTimestamp="2026-01-21 14:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:05.601779259 +0000 UTC m=+1207.678612288" watchObservedRunningTime="2026-01-21 14:54:05.606162934 +0000 UTC m=+1207.682995963" Jan 21 14:54:06 crc kubenswrapper[4902]: I0121 14:54:06.610587 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerStarted","Data":"51096ea975abaac8a431582dfeae27fd25ef6ce3f6de1cc2cf3f257b1bb51809"} Jan 21 14:54:06 crc kubenswrapper[4902]: I0121 14:54:06.640475 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.829894052 podStartE2EDuration="6.640455115s" podCreationTimestamp="2026-01-21 14:54:00 +0000 UTC" firstStartedPulling="2026-01-21 14:54:01.557273255 +0000 UTC m=+1203.634106284" lastFinishedPulling="2026-01-21 14:54:05.367834318 +0000 UTC m=+1207.444667347" observedRunningTime="2026-01-21 14:54:06.639030844 +0000 UTC m=+1208.715863883" watchObservedRunningTime="2026-01-21 14:54:06.640455115 +0000 UTC m=+1208.717288144" Jan 21 14:54:06 crc kubenswrapper[4902]: I0121 14:54:06.989957 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:54:07 crc kubenswrapper[4902]: I0121 14:54:07.633755 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:54:07 crc kubenswrapper[4902]: I0121 14:54:07.876544 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 14:54:07 crc kubenswrapper[4902]: I0121 14:54:07.877703 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 14:54:07 crc kubenswrapper[4902]: I0121 14:54:07.881563 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 14:54:07 crc kubenswrapper[4902]: I0121 14:54:07.884393 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-58xqz" Jan 21 14:54:07 crc kubenswrapper[4902]: I0121 14:54:07.892296 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 14:54:07 crc kubenswrapper[4902]: I0121 14:54:07.910764 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.037032 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config-secret\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.037201 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5x67\" (UniqueName: \"kubernetes.io/projected/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-kube-api-access-s5x67\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.037251 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.037393 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.082228 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.139136 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5x67\" (UniqueName: \"kubernetes.io/projected/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-kube-api-access-s5x67\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.139207 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.139258 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config\") pod \"openstackclient\" (UID: 
\"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.139338 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config-secret\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.139668 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6445cbf9c4-z4mzt" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:53984->10.217.0.156:9311: read: connection reset by peer" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.139824 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6445cbf9c4-z4mzt" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:53996->10.217.0.156:9311: read: connection reset by peer" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.140430 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.145849 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config-secret\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.163779 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.177460 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5x67\" (UniqueName: \"kubernetes.io/projected/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-kube-api-access-s5x67\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.197308 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-g578w"] Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.197675 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" podUID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" containerName="dnsmasq-dns" containerID="cri-o://44d81e25a8e35f144214a1d8f5c1fb7aa5a894a9f7d53ffd55c93106fb15e4c3" gracePeriod=10 Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.223636 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.698960 4902 generic.go:334] "Generic (PLEG): container finished" podID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerID="f1c482b66b7a7732a3644035350a84867090e3f9916167da7455775012f80692" exitCode=0 Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.699613 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6445cbf9c4-z4mzt" event={"ID":"b8579d67-5e61-40f2-9725-b695f7d7bb81","Type":"ContainerDied","Data":"f1c482b66b7a7732a3644035350a84867090e3f9916167da7455775012f80692"} Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.756363 4902 generic.go:334] "Generic (PLEG): container finished" podID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" containerID="44d81e25a8e35f144214a1d8f5c1fb7aa5a894a9f7d53ffd55c93106fb15e4c3" exitCode=0 Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.756555 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" event={"ID":"95d16264-79d8-49aa-92aa-9f95f6f88ee5","Type":"ContainerDied","Data":"44d81e25a8e35f144214a1d8f5c1fb7aa5a894a9f7d53ffd55c93106fb15e4c3"} Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.863481 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.965619 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.045737 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.091073 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data\") pod \"b8579d67-5e61-40f2-9725-b695f7d7bb81\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.091457 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data-custom\") pod \"b8579d67-5e61-40f2-9725-b695f7d7bb81\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.091545 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8579d67-5e61-40f2-9725-b695f7d7bb81-logs\") pod \"b8579d67-5e61-40f2-9725-b695f7d7bb81\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.091797 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-combined-ca-bundle\") pod \"b8579d67-5e61-40f2-9725-b695f7d7bb81\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.091852 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vz6l\" (UniqueName: \"kubernetes.io/projected/b8579d67-5e61-40f2-9725-b695f7d7bb81-kube-api-access-8vz6l\") pod \"b8579d67-5e61-40f2-9725-b695f7d7bb81\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.095649 4902 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8579d67-5e61-40f2-9725-b695f7d7bb81-logs" (OuterVolumeSpecName: "logs") pod "b8579d67-5e61-40f2-9725-b695f7d7bb81" (UID: "b8579d67-5e61-40f2-9725-b695f7d7bb81"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.101254 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b8579d67-5e61-40f2-9725-b695f7d7bb81" (UID: "b8579d67-5e61-40f2-9725-b695f7d7bb81"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.103586 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8579d67-5e61-40f2-9725-b695f7d7bb81-kube-api-access-8vz6l" (OuterVolumeSpecName: "kube-api-access-8vz6l") pod "b8579d67-5e61-40f2-9725-b695f7d7bb81" (UID: "b8579d67-5e61-40f2-9725-b695f7d7bb81"). InnerVolumeSpecName "kube-api-access-8vz6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.142173 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8579d67-5e61-40f2-9725-b695f7d7bb81" (UID: "b8579d67-5e61-40f2-9725-b695f7d7bb81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.191456 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data" (OuterVolumeSpecName: "config-data") pod "b8579d67-5e61-40f2-9725-b695f7d7bb81" (UID: "b8579d67-5e61-40f2-9725-b695f7d7bb81"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.196256 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.196308 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.196322 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8579d67-5e61-40f2-9725-b695f7d7bb81-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.196337 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.196350 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vz6l\" (UniqueName: \"kubernetes.io/projected/b8579d67-5e61-40f2-9725-b695f7d7bb81-kube-api-access-8vz6l\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.305771 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.320299 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.400083 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxthz\" (UniqueName: \"kubernetes.io/projected/95d16264-79d8-49aa-92aa-9f95f6f88ee5-kube-api-access-lxthz\") pod \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.400251 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-svc\") pod \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.400381 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-swift-storage-0\") pod \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.400457 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-nb\") pod \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.400531 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-config\") pod \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 
14:54:09.400598 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-sb\") pod \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.438233 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95d16264-79d8-49aa-92aa-9f95f6f88ee5-kube-api-access-lxthz" (OuterVolumeSpecName: "kube-api-access-lxthz") pod "95d16264-79d8-49aa-92aa-9f95f6f88ee5" (UID: "95d16264-79d8-49aa-92aa-9f95f6f88ee5"). InnerVolumeSpecName "kube-api-access-lxthz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.502991 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95d16264-79d8-49aa-92aa-9f95f6f88ee5" (UID: "95d16264-79d8-49aa-92aa-9f95f6f88ee5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.504450 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.504563 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxthz\" (UniqueName: \"kubernetes.io/projected/95d16264-79d8-49aa-92aa-9f95f6f88ee5-kube-api-access-lxthz\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.527846 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-config" (OuterVolumeSpecName: "config") pod "95d16264-79d8-49aa-92aa-9f95f6f88ee5" (UID: "95d16264-79d8-49aa-92aa-9f95f6f88ee5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.547622 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "95d16264-79d8-49aa-92aa-9f95f6f88ee5" (UID: "95d16264-79d8-49aa-92aa-9f95f6f88ee5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.554683 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "95d16264-79d8-49aa-92aa-9f95f6f88ee5" (UID: "95d16264-79d8-49aa-92aa-9f95f6f88ee5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.564567 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "95d16264-79d8-49aa-92aa-9f95f6f88ee5" (UID: "95d16264-79d8-49aa-92aa-9f95f6f88ee5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.606016 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.606071 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.606086 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.606101 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.767814 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6445cbf9c4-z4mzt" event={"ID":"b8579d67-5e61-40f2-9725-b695f7d7bb81","Type":"ContainerDied","Data":"b50cfe39dc085f1d64312e802e7695d3392dba0e54502997f28533561b89173b"} Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.767894 4902 scope.go:117] "RemoveContainer" containerID="f1c482b66b7a7732a3644035350a84867090e3f9916167da7455775012f80692" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.768087 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.782739 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b14dfbd1-cf80-4ba8-9372-ca5767f5d689","Type":"ContainerStarted","Data":"e3a77e759e3882d3e95a1ef76b5a499badcc1c22a11d9265cd99602a2c5102a4"} Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.788922 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerName="cinder-scheduler" containerID="cri-o://fde7aac6a44c88fcbc975cf724be75aaa4036c1671f2b89f5e2108d9cd43b508" gracePeriod=30 Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.789284 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.791190 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerName="probe" containerID="cri-o://174cb307680316a6f5706c69b676f7998050e192675c67de1f05b839419fa871" gracePeriod=30 Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.791295 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" event={"ID":"95d16264-79d8-49aa-92aa-9f95f6f88ee5","Type":"ContainerDied","Data":"3301d34c981a80c2f15194742c15041f8dd34876ffe78731a10b3a3473a032e9"} Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.835283 4902 scope.go:117] "RemoveContainer" containerID="23ea5f83fdd14483bee45b939eae36b3f6796ce08c5e8747aa8cd9acb4874d6c" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.839274 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6445cbf9c4-z4mzt"] Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.867097 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6445cbf9c4-z4mzt"] Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.894177 4902 scope.go:117] "RemoveContainer" containerID="44d81e25a8e35f144214a1d8f5c1fb7aa5a894a9f7d53ffd55c93106fb15e4c3" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.897236 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-g578w"] Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.906517 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-g578w"] Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.966468 4902 scope.go:117] "RemoveContainer" containerID="1bf0f0d44db0898ad571fc3dc44b938eda2883d14d10f909c94ababfd0ac149c" Jan 21 14:54:10 crc kubenswrapper[4902]: I0121 14:54:10.307207 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" path="/var/lib/kubelet/pods/95d16264-79d8-49aa-92aa-9f95f6f88ee5/volumes" Jan 21 14:54:10 crc kubenswrapper[4902]: I0121 14:54:10.307966 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" path="/var/lib/kubelet/pods/b8579d67-5e61-40f2-9725-b695f7d7bb81/volumes" Jan 21 14:54:11 crc kubenswrapper[4902]: I0121 14:54:11.563338 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:54:11 crc kubenswrapper[4902]: I0121 14:54:11.638458 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:54:11 crc kubenswrapper[4902]: I0121 14:54:11.655778 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:54:11 crc kubenswrapper[4902]: I0121 14:54:11.827882 4902 generic.go:334] "Generic (PLEG): container finished" podID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerID="174cb307680316a6f5706c69b676f7998050e192675c67de1f05b839419fa871" exitCode=0 Jan 21 14:54:11 crc kubenswrapper[4902]: I0121 14:54:11.828793 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14","Type":"ContainerDied","Data":"174cb307680316a6f5706c69b676f7998050e192675c67de1f05b839419fa871"} Jan 21 14:54:12 crc 
Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.852780 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14","Type":"ContainerDied","Data":"fde7aac6a44c88fcbc975cf724be75aaa4036c1671f2b89f5e2108d9cd43b508"}
Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.942305 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.979273 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msnnk\" (UniqueName: \"kubernetes.io/projected/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-kube-api-access-msnnk\") pod \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") "
Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.979350 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data-custom\") pod \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") "
Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.979391 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-etc-machine-id\") pod \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") "
Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.979448 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-scripts\") pod \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") "
Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.979493 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data\") pod \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") "
Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.979669 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-combined-ca-bundle\") pod \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") "
Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.982186 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" (UID: "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.986719 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" (UID: "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.992235 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-kube-api-access-msnnk" (OuterVolumeSpecName: "kube-api-access-msnnk") pod "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" (UID: "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14"). InnerVolumeSpecName "kube-api-access-msnnk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:12.999115 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-scripts" (OuterVolumeSpecName: "scripts") pod "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" (UID: "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.082500 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msnnk\" (UniqueName: \"kubernetes.io/projected/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-kube-api-access-msnnk\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.082538 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.082550 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.082564 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.125386 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" (UID: "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.131071 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data" (OuterVolumeSpecName: "config-data") pod "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" (UID: "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.186407 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.186612 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.878704 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14","Type":"ContainerDied","Data":"12b1ce4108345896bb01a54435bff262c712e46700037435ba35ac6fcf91daeb"}
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.878760 4902 scope.go:117] "RemoveContainer" containerID="174cb307680316a6f5706c69b676f7998050e192675c67de1f05b839419fa871"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.878916 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.916532 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.920375 4902 scope.go:117] "RemoveContainer" containerID="fde7aac6a44c88fcbc975cf724be75aaa4036c1671f2b89f5e2108d9cd43b508"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.928013 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958113 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 14:54:13 crc kubenswrapper[4902]: E0121 14:54:13.958560 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958579 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api"
Jan 21 14:54:13 crc kubenswrapper[4902]: E0121 14:54:13.958608 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" containerName="dnsmasq-dns"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958619 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" containerName="dnsmasq-dns"
Jan 21 14:54:13 crc kubenswrapper[4902]: E0121 14:54:13.958629 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerName="probe"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958638 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerName="probe"
Jan 21 14:54:13 crc kubenswrapper[4902]: E0121 14:54:13.958653 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" containerName="init"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958660 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" containerName="init"
Jan 21 14:54:13 crc kubenswrapper[4902]: E0121 14:54:13.958671 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api-log"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958679 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api-log"
Jan 21 14:54:13 crc kubenswrapper[4902]: E0121 14:54:13.958694 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerName="cinder-scheduler"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958702 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerName="cinder-scheduler"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958888 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerName="probe"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958905 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api-log"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958918 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958938 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" containerName="dnsmasq-dns"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958957 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerName="cinder-scheduler"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.960128 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.965638 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.969800 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.000500 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0"
Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.000588 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-scripts\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0"
Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.000642 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0"
Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.000691 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x88d\" (UniqueName: \"kubernetes.io/projected/58b4678d-e59b-49d1-b06e-338a42a0e51e-kube-api-access-9x88d\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0"
Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.000860 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0"
Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.000954 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58b4678d-e59b-49d1-b06e-338a42a0e51e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0"
Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.102725 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58b4678d-e59b-49d1-b06e-338a42a0e51e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0"
Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.102819 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0"
Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.102884 4902 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-scripts\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.102927 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.102982 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x88d\" (UniqueName: \"kubernetes.io/projected/58b4678d-e59b-49d1-b06e-338a42a0e51e-kube-api-access-9x88d\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.103135 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.104582 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58b4678d-e59b-49d1-b06e-338a42a0e51e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.111409 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.112846 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.126219 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x88d\" (UniqueName: \"kubernetes.io/projected/58b4678d-e59b-49d1-b06e-338a42a0e51e-kube-api-access-9x88d\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.126226 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-scripts\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.132620 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 
crc kubenswrapper[4902]: I0121 14:54:14.299610 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.305575 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" path="/var/lib/kubelet/pods/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14/volumes" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.757005 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.890965 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"58b4678d-e59b-49d1-b06e-338a42a0e51e","Type":"ContainerStarted","Data":"83d1b2eb20981f2a9a2a1eda26c8252ba222ee4a68dd3f7546c40138c8e10370"} Jan 21 14:54:15 crc kubenswrapper[4902]: I0121 14:54:15.033372 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:54:15 crc kubenswrapper[4902]: I0121 14:54:15.092892 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-569676bc6b-gw28h"] Jan 21 14:54:15 crc kubenswrapper[4902]: I0121 14:54:15.093165 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-569676bc6b-gw28h" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerName="neutron-api" containerID="cri-o://a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30" gracePeriod=30 Jan 21 14:54:15 crc kubenswrapper[4902]: I0121 14:54:15.093551 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-569676bc6b-gw28h" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerName="neutron-httpd" containerID="cri-o://6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69" gracePeriod=30 Jan 21 14:54:15 crc kubenswrapper[4902]: I0121 14:54:15.657083 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 14:54:15 crc kubenswrapper[4902]: I0121 14:54:15.914131 4902 generic.go:334] "Generic (PLEG): container finished" podID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerID="6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69" exitCode=0 Jan 21 14:54:15 crc kubenswrapper[4902]: I0121 14:54:15.914201 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569676bc6b-gw28h" event={"ID":"b2e9efdb-8b95-4082-8a1d-8b5a987b2516","Type":"ContainerDied","Data":"6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69"} Jan 21 14:54:15 crc kubenswrapper[4902]: I0121 14:54:15.918293 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"58b4678d-e59b-49d1-b06e-338a42a0e51e","Type":"ContainerStarted","Data":"669110a27652bb9b7b8004db550a35eb0dceaedaf48edf3ca2483cc2449bc57c"} Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.307674 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-54bc9cbc97-hx966"] Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.309280 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-54bc9cbc97-hx966"] Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.309378 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.315779 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.316805 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.316962 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.477920 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-public-tls-certs\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.478241 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-run-httpd\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.478266 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msjhn\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-kube-api-access-msjhn\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.478292 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-log-httpd\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.479550 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-config-data\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.479623 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-internal-tls-certs\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.479676 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-combined-ca-bundle\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.479707 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-etc-swift\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.580921 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-public-tls-certs\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.580965 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-run-httpd\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.580994 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msjhn\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-kube-api-access-msjhn\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.581010 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-log-httpd\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.581033 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-config-data\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.581078 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-internal-tls-certs\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.581128 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-combined-ca-bundle\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.581160 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-etc-swift\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.581702 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-run-httpd\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.584080 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-log-httpd\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.586629 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-public-tls-certs\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.587532 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-combined-ca-bundle\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.587950 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-internal-tls-certs\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.589481 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-etc-swift\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.599850 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-config-data\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.599941 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msjhn\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-kube-api-access-msjhn\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.657616 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.930976 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"58b4678d-e59b-49d1-b06e-338a42a0e51e","Type":"ContainerStarted","Data":"c7dbc8dbff5390b63de46436cbdf0b7cd9f0cbbc930ab3a08d07d477a6d55001"} Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.957645 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.957622465 podStartE2EDuration="3.957622465s" podCreationTimestamp="2026-01-21 14:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:16.951637946 +0000 UTC m=+1219.028470985" watchObservedRunningTime="2026-01-21 14:54:16.957622465 +0000 UTC m=+1219.034455494" Jan 21 14:54:17 crc kubenswrapper[4902]: I0121 14:54:17.276891 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-54bc9cbc97-hx966"] Jan 21 14:54:17 crc kubenswrapper[4902]: I0121 14:54:17.769641 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:54:17 crc kubenswrapper[4902]: I0121 14:54:17.770216 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:54:17 crc kubenswrapper[4902]: I0121 14:54:17.943135 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54bc9cbc97-hx966" event={"ID":"3389852b-01f7-4dc9-b7c2-73c858ba1268","Type":"ContainerStarted","Data":"03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398"} Jan 21 14:54:17 crc kubenswrapper[4902]: I0121 14:54:17.943188 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54bc9cbc97-hx966" event={"ID":"3389852b-01f7-4dc9-b7c2-73c858ba1268","Type":"ContainerStarted","Data":"e83ca63bcfd9328da7616c6b5c09b31fc0bd4751ea531f09a2e1f38c1a7f3d76"} Jan 21 14:54:18 crc kubenswrapper[4902]: I0121 14:54:18.953425 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54bc9cbc97-hx966" event={"ID":"3389852b-01f7-4dc9-b7c2-73c858ba1268","Type":"ContainerStarted","Data":"2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb"} Jan 21 14:54:18 crc kubenswrapper[4902]: I0121 14:54:18.953891 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:18 crc kubenswrapper[4902]: I0121 14:54:18.953911 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:18 crc kubenswrapper[4902]: I0121 14:54:18.988567 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-54bc9cbc97-hx966" podStartSLOduration=2.988538383 podStartE2EDuration="2.988538383s" podCreationTimestamp="2026-01-21 14:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 14:54:18.981534794 +0000 UTC m=+1221.058367823" watchObservedRunningTime="2026-01-21 14:54:18.988538383 +0000 UTC m=+1221.065371412" Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.254595 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.254945 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="ceilometer-central-agent" containerID="cri-o://4d2c354c316e2909a7d5339fb8d71148e96c6d6c6e258b111839778343d895fd" gracePeriod=30 Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.254989 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="ceilometer-notification-agent" containerID="cri-o://cb93f6564b507552595aee7a7e6446eef24375fdbc49027675bd228983bf00c6" gracePeriod=30 Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.254985 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="sg-core" containerID="cri-o://80d76f992bb8bfbe9c73393b52cc22c46d041fdcc2adb3e649bc943a55253ef8" gracePeriod=30 Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.255108 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="proxy-httpd" containerID="cri-o://51096ea975abaac8a431582dfeae27fd25ef6ce3f6de1cc2cf3f257b1bb51809" gracePeriod=30 Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.264514 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.164:3000/\": EOF" Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.300070 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.970604 4902 generic.go:334] "Generic (PLEG): container finished" podID="e729f055-6d31-4994-8561-fbefd5aba351" containerID="51096ea975abaac8a431582dfeae27fd25ef6ce3f6de1cc2cf3f257b1bb51809" exitCode=0 Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.970645 4902 generic.go:334] "Generic (PLEG): container finished" podID="e729f055-6d31-4994-8561-fbefd5aba351" containerID="80d76f992bb8bfbe9c73393b52cc22c46d041fdcc2adb3e649bc943a55253ef8" exitCode=2 Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.970658 4902 generic.go:334] "Generic (PLEG): container finished" podID="e729f055-6d31-4994-8561-fbefd5aba351" containerID="4d2c354c316e2909a7d5339fb8d71148e96c6d6c6e258b111839778343d895fd" exitCode=0 Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.970674 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerDied","Data":"51096ea975abaac8a431582dfeae27fd25ef6ce3f6de1cc2cf3f257b1bb51809"} Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.970725 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerDied","Data":"80d76f992bb8bfbe9c73393b52cc22c46d041fdcc2adb3e649bc943a55253ef8"} Jan 21 14:54:19 
crc kubenswrapper[4902]: I0121 14:54:19.970742 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerDied","Data":"4d2c354c316e2909a7d5339fb8d71148e96c6d6c6e258b111839778343d895fd"} Jan 21 14:54:21 crc kubenswrapper[4902]: I0121 14:54:21.995636 4902 generic.go:334] "Generic (PLEG): container finished" podID="e729f055-6d31-4994-8561-fbefd5aba351" containerID="cb93f6564b507552595aee7a7e6446eef24375fdbc49027675bd228983bf00c6" exitCode=0 Jan 21 14:54:21 crc kubenswrapper[4902]: I0121 14:54:21.996254 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerDied","Data":"cb93f6564b507552595aee7a7e6446eef24375fdbc49027675bd228983bf00c6"} Jan 21 14:54:24 crc kubenswrapper[4902]: I0121 14:54:24.526006 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.853697 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.925075 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-scripts\") pod \"e729f055-6d31-4994-8561-fbefd5aba351\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.925180 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-run-httpd\") pod \"e729f055-6d31-4994-8561-fbefd5aba351\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.925240 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-log-httpd\") pod \"e729f055-6d31-4994-8561-fbefd5aba351\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.925294 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-sg-core-conf-yaml\") pod \"e729f055-6d31-4994-8561-fbefd5aba351\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.925350 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-combined-ca-bundle\") pod \"e729f055-6d31-4994-8561-fbefd5aba351\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.925378 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gszw2\" (UniqueName: \"kubernetes.io/projected/e729f055-6d31-4994-8561-fbefd5aba351-kube-api-access-gszw2\") pod \"e729f055-6d31-4994-8561-fbefd5aba351\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.925411 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-config-data\") pod 
\"e729f055-6d31-4994-8561-fbefd5aba351\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.925840 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e729f055-6d31-4994-8561-fbefd5aba351" (UID: "e729f055-6d31-4994-8561-fbefd5aba351"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.929197 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e729f055-6d31-4994-8561-fbefd5aba351" (UID: "e729f055-6d31-4994-8561-fbefd5aba351"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.929538 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-scripts" (OuterVolumeSpecName: "scripts") pod "e729f055-6d31-4994-8561-fbefd5aba351" (UID: "e729f055-6d31-4994-8561-fbefd5aba351"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.929728 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e729f055-6d31-4994-8561-fbefd5aba351-kube-api-access-gszw2" (OuterVolumeSpecName: "kube-api-access-gszw2") pod "e729f055-6d31-4994-8561-fbefd5aba351" (UID: "e729f055-6d31-4994-8561-fbefd5aba351"). InnerVolumeSpecName "kube-api-access-gszw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.979786 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e729f055-6d31-4994-8561-fbefd5aba351" (UID: "e729f055-6d31-4994-8561-fbefd5aba351"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.028548 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gszw2\" (UniqueName: \"kubernetes.io/projected/e729f055-6d31-4994-8561-fbefd5aba351-kube-api-access-gszw2\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.028588 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.028604 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.028616 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.028629 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.050190 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e729f055-6d31-4994-8561-fbefd5aba351" (UID: "e729f055-6d31-4994-8561-fbefd5aba351"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.091079 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-config-data" (OuterVolumeSpecName: "config-data") pod "e729f055-6d31-4994-8561-fbefd5aba351" (UID: "e729f055-6d31-4994-8561-fbefd5aba351"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.130097 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.130126 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.139135 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.231303 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slg7l\" (UniqueName: \"kubernetes.io/projected/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-kube-api-access-slg7l\") pod \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.231367 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-combined-ca-bundle\") pod \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.231418 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-httpd-config\") pod \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.231441 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-config\") pod \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.231486 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-ovndb-tls-certs\") pod \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.235478 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b2e9efdb-8b95-4082-8a1d-8b5a987b2516" (UID: "b2e9efdb-8b95-4082-8a1d-8b5a987b2516"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.236778 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-kube-api-access-slg7l" (OuterVolumeSpecName: "kube-api-access-slg7l") pod "b2e9efdb-8b95-4082-8a1d-8b5a987b2516" (UID: "b2e9efdb-8b95-4082-8a1d-8b5a987b2516"). InnerVolumeSpecName "kube-api-access-slg7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.290026 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-config" (OuterVolumeSpecName: "config") pod "b2e9efdb-8b95-4082-8a1d-8b5a987b2516" (UID: "b2e9efdb-8b95-4082-8a1d-8b5a987b2516"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.297749 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2e9efdb-8b95-4082-8a1d-8b5a987b2516" (UID: "b2e9efdb-8b95-4082-8a1d-8b5a987b2516"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.309234 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b2e9efdb-8b95-4082-8a1d-8b5a987b2516" (UID: "b2e9efdb-8b95-4082-8a1d-8b5a987b2516"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.333508 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.333569 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.333582 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.333612 4902 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.333626 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slg7l\" (UniqueName: \"kubernetes.io/projected/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-kube-api-access-slg7l\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.426417 4902 generic.go:334] "Generic (PLEG): container finished" podID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerID="a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30" exitCode=0 Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.426751 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569676bc6b-gw28h" event={"ID":"b2e9efdb-8b95-4082-8a1d-8b5a987b2516","Type":"ContainerDied","Data":"a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30"} Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.426789 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569676bc6b-gw28h" event={"ID":"b2e9efdb-8b95-4082-8a1d-8b5a987b2516","Type":"ContainerDied","Data":"d6d295ac44d5e84b8146c28859766bda166d60fe457d50975c21ca70126c4bc2"} Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.426800 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.426814 4902 scope.go:117] "RemoveContainer" containerID="6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.429841 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b14dfbd1-cf80-4ba8-9372-ca5767f5d689","Type":"ContainerStarted","Data":"c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a"} Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.443079 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerDied","Data":"e69baea6eee432132f0068671e39127960be033916e49020781e5d192b5eaecc"} Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.443217 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.454297 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.207483727 podStartE2EDuration="19.454281733s" podCreationTimestamp="2026-01-21 14:54:07 +0000 UTC" firstStartedPulling="2026-01-21 14:54:09.319224308 +0000 UTC m=+1211.396057337" lastFinishedPulling="2026-01-21 14:54:25.566022314 +0000 UTC m=+1227.642855343" observedRunningTime="2026-01-21 14:54:26.449735054 +0000 UTC m=+1228.526568083" watchObservedRunningTime="2026-01-21 14:54:26.454281733 +0000 UTC m=+1228.531114762" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.533737 4902 scope.go:117] "RemoveContainer" containerID="a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.539087 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-569676bc6b-gw28h"] Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.554953 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-569676bc6b-gw28h"] Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.563071 4902 scope.go:117] "RemoveContainer" containerID="6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69" Jan 21 14:54:26 crc kubenswrapper[4902]: E0121 14:54:26.563666 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69\": container with ID starting with 6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69 not found: ID does not exist" containerID="6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.563707 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69"} err="failed to get container status \"6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69\": rpc error: code = NotFound desc = could not find container \"6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69\": container with ID starting with 6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69 not found: ID does not exist" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.563732 4902 scope.go:117] "RemoveContainer" containerID="a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30" Jan 21 
14:54:26 crc kubenswrapper[4902]: E0121 14:54:26.564460 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30\": container with ID starting with a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30 not found: ID does not exist" containerID="a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.564486 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30"} err="failed to get container status \"a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30\": rpc error: code = NotFound desc = could not find container \"a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30\": container with ID starting with a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30 not found: ID does not exist" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.564502 4902 scope.go:117] "RemoveContainer" containerID="51096ea975abaac8a431582dfeae27fd25ef6ce3f6de1cc2cf3f257b1bb51809" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.568315 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.578668 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.588782 4902 scope.go:117] "RemoveContainer" containerID="80d76f992bb8bfbe9c73393b52cc22c46d041fdcc2adb3e649bc943a55253ef8" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.593323 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:26 crc kubenswrapper[4902]: E0121 14:54:26.593786 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="sg-core" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.593809 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="sg-core" Jan 21 14:54:26 crc kubenswrapper[4902]: E0121 14:54:26.593845 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="ceilometer-central-agent" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.593855 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="ceilometer-central-agent" Jan 21 14:54:26 crc kubenswrapper[4902]: E0121 14:54:26.593874 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerName="neutron-httpd" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.593881 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerName="neutron-httpd" Jan 21 14:54:26 crc kubenswrapper[4902]: E0121 14:54:26.593896 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerName="neutron-api" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.593904 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerName="neutron-api" Jan 21 14:54:26 crc kubenswrapper[4902]: E0121 14:54:26.593921 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="ceilometer-notification-agent" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.593930 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="ceilometer-notification-agent" Jan 21 14:54:26 crc kubenswrapper[4902]: E0121 14:54:26.593950 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="proxy-httpd" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.593958 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="proxy-httpd" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.594184 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="ceilometer-notification-agent" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.594203 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="proxy-httpd" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.594213 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerName="neutron-api" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.594227 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="ceilometer-central-agent" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.594238 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerName="neutron-httpd" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.594251 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="sg-core" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.596186 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.598362 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.598744 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.610734 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.610750 4902 scope.go:117] "RemoveContainer" containerID="cb93f6564b507552595aee7a7e6446eef24375fdbc49027675bd228983bf00c6" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.635313 4902 scope.go:117] "RemoveContainer" containerID="4d2c354c316e2909a7d5339fb8d71148e96c6d6c6e258b111839778343d895fd" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.637366 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-scripts\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.637538 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmm5r\" (UniqueName: \"kubernetes.io/projected/ff47e21a-75a1-4d66-b599-725966fa456e-kube-api-access-wmm5r\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.637605 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.637638 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-log-httpd\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.637723 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.637786 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-run-httpd\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.637885 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-config-data\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 
14:54:26.663624 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.668631 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.738979 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-config-data\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.739212 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-scripts\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.739308 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmm5r\" (UniqueName: \"kubernetes.io/projected/ff47e21a-75a1-4d66-b599-725966fa456e-kube-api-access-wmm5r\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.739347 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.739374 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-log-httpd\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.739394 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.739443 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-run-httpd\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.743105 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-log-httpd\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.745114 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-run-httpd\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.747938 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-scripts\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0"
Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.750779 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0"
Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.756024 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0"
Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.761034 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-config-data\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0"
Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.767859 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmm5r\" (UniqueName: \"kubernetes.io/projected/ff47e21a-75a1-4d66-b599-725966fa456e-kube-api-access-wmm5r\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0"
Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.923547 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
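Every volume of the replacement ceilometer-0 pod walks through the same three stages in the lines above: VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), and MountVolume.SetUp succeeded (operation_generator.go:637). A toy state machine over those stages, assuming nothing beyond what the messages themselves show:

package main

import "fmt"

// phase mirrors the three per-volume stages visible in the log.
type phase int

const (
	verified phase = iota
	mountStarted
	setUpSucceeded
)

var phaseMsg = map[phase]string{
	verified:       "operationExecutor.VerifyControllerAttachedVolume started",
	mountStarted:   "operationExecutor.MountVolume started",
	setUpSucceeded: "MountVolume.SetUp succeeded",
}

func main() {
	volumes := []string{"scripts", "kube-api-access-wmm5r", "sg-core-conf-yaml", "log-httpd", "combined-ca-bundle", "run-httpd", "config-data"}
	state := make(map[string]phase, len(volumes))
	// advance every volume through the two transitions the reconciler logs
	for p := verified; p <= setUpSucceeded; p++ {
		for _, v := range volumes {
			state[v] = p
			fmt.Printf("%s for volume %q pod %q\n", phaseMsg[p], v, "ceilometer-0")
		}
	}
}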
Jan 21 14:54:27 crc kubenswrapper[4902]: I0121 14:54:27.428213 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:54:27 crc kubenswrapper[4902]: I0121 14:54:27.454648 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerStarted","Data":"01acb75f5aa7a52c23b6938805bfcfd86387b55bac88b2a0c34ff3a7e37b8a51"}
Jan 21 14:54:28 crc kubenswrapper[4902]: I0121 14:54:28.305905 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" path="/var/lib/kubelet/pods/b2e9efdb-8b95-4082-8a1d-8b5a987b2516/volumes"
Jan 21 14:54:28 crc kubenswrapper[4902]: I0121 14:54:28.306536 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e729f055-6d31-4994-8561-fbefd5aba351" path="/var/lib/kubelet/pods/e729f055-6d31-4994-8561-fbefd5aba351/volumes"
Jan 21 14:54:31 crc kubenswrapper[4902]: I0121 14:54:31.502836 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerStarted","Data":"9488c0012fa14d74a2416c6470b9e3bb2fd6546e005a8aa4d68294c775bb7bf3"}
Jan 21 14:54:31 crc kubenswrapper[4902]: I0121 14:54:31.503363 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerStarted","Data":"2529bfc2f37257b9b0cf5337629fa086f24c54088c21213974bcbbd9f5d66189"}
Jan 21 14:54:32 crc kubenswrapper[4902]: I0121 14:54:32.139494 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:54:32 crc kubenswrapper[4902]: I0121 14:54:32.513873 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerStarted","Data":"608e1b4097af22dfa5d4ac8e16f96f4559ccfac69b91f2f6b0c474e5b5b3009a"}
Jan 21 14:54:33 crc kubenswrapper[4902]: I0121 14:54:33.529908 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerStarted","Data":"d65b8776fa29099f87b966a2054c9cd54a03d0fce6f73cb30c4f261bbc860619"}
Jan 21 14:54:33 crc kubenswrapper[4902]: I0121 14:54:33.530220 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="proxy-httpd" containerID="cri-o://d65b8776fa29099f87b966a2054c9cd54a03d0fce6f73cb30c4f261bbc860619" gracePeriod=30
Jan 21 14:54:33 crc kubenswrapper[4902]: I0121 14:54:33.530243 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="ceilometer-notification-agent" containerID="cri-o://9488c0012fa14d74a2416c6470b9e3bb2fd6546e005a8aa4d68294c775bb7bf3" gracePeriod=30
Jan 21 14:54:33 crc kubenswrapper[4902]: I0121 14:54:33.530258 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="sg-core" containerID="cri-o://608e1b4097af22dfa5d4ac8e16f96f4559ccfac69b91f2f6b0c474e5b5b3009a" gracePeriod=30
Jan 21 14:54:33 crc kubenswrapper[4902]: I0121 14:54:33.530255 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 14:54:33 crc kubenswrapper[4902]: I0121 14:54:33.530189 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="ceilometer-central-agent" containerID="cri-o://2529bfc2f37257b9b0cf5337629fa086f24c54088c21213974bcbbd9f5d66189" gracePeriod=30
Jan 21 14:54:33 crc kubenswrapper[4902]: I0121 14:54:33.577724 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.822673118 podStartE2EDuration="7.577701869s" podCreationTimestamp="2026-01-21 14:54:26 +0000 UTC" firstStartedPulling="2026-01-21 14:54:27.440172175 +0000 UTC m=+1229.517005204" lastFinishedPulling="2026-01-21 14:54:33.195200926 +0000 UTC m=+1235.272033955" observedRunningTime="2026-01-21 14:54:33.555427775 +0000 UTC m=+1235.632260804" watchObservedRunningTime="2026-01-21 14:54:33.577701869 +0000 UTC m=+1235.654534898"
Jan 21 14:54:34 crc kubenswrapper[4902]: I0121 14:54:34.541538 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff47e21a-75a1-4d66-b599-725966fa456e" containerID="d65b8776fa29099f87b966a2054c9cd54a03d0fce6f73cb30c4f261bbc860619" exitCode=0
Jan 21 14:54:34 crc kubenswrapper[4902]: I0121 14:54:34.541899 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff47e21a-75a1-4d66-b599-725966fa456e" containerID="608e1b4097af22dfa5d4ac8e16f96f4559ccfac69b91f2f6b0c474e5b5b3009a" exitCode=2
Jan 21 14:54:34 crc kubenswrapper[4902]: I0121 14:54:34.541910 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff47e21a-75a1-4d66-b599-725966fa456e" containerID="9488c0012fa14d74a2416c6470b9e3bb2fd6546e005a8aa4d68294c775bb7bf3" exitCode=0
Jan 21 14:54:34 crc kubenswrapper[4902]: I0121 14:54:34.541620 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerDied","Data":"d65b8776fa29099f87b966a2054c9cd54a03d0fce6f73cb30c4f261bbc860619"}
Jan 21 14:54:34 crc kubenswrapper[4902]: I0121 14:54:34.541952 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerDied","Data":"608e1b4097af22dfa5d4ac8e16f96f4559ccfac69b91f2f6b0c474e5b5b3009a"}
Jan 21 14:54:34 crc kubenswrapper[4902]: I0121 14:54:34.541970 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerDied","Data":"9488c0012fa14d74a2416c6470b9e3bb2fd6546e005a8aa4d68294c775bb7bf3"}
Jan 21 14:54:35 crc kubenswrapper[4902]: I0121 14:54:35.249792 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 14:54:35 crc kubenswrapper[4902]: I0121 14:54:35.250103 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="30ff158a-452e-4180-b99e-9a171035d794" containerName="glance-log" containerID="cri-o://5409eefc8bdd22f49ffbcce65cbb54882443f5c301c1ffa55abf84f9c6380456" gracePeriod=30
Jan 21 14:54:35 crc kubenswrapper[4902]: I0121 14:54:35.250190 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="30ff158a-452e-4180-b99e-9a171035d794" containerName="glance-httpd" containerID="cri-o://71cca2f7cf5320c189b79d957584fa123f879c7a9cb4707bef6dd5f5eb455d19" gracePeriod=30
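The pod_startup_latency_tracker line above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (14:54:33.577701869 - 14:54:26 = 7.577701869s), and podStartSLOduration is that figure minus the image-pull window lastFinishedPulling - firstStartedPulling (7.577701869s - 5.755028751s = 1.822673118s). A self-contained check of the arithmetic, timestamps copied from the line:

package main

import (
	"fmt"
	"time"
)

// layout matches Go's default time.Time formatting, which is what the
// kubelet prints, e.g. "2026-01-21 14:54:26 +0000 UTC".
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-21 14:54:26 +0000 UTC")
	firstPull := mustParse("2026-01-21 14:54:27.440172175 +0000 UTC")
	lastPull := mustParse("2026-01-21 14:54:33.195200926 +0000 UTC")
	watched := mustParse("2026-01-21 14:54:33.577701869 +0000 UTC")

	e2e := watched.Sub(created)          // podStartE2EDuration: 7.577701869s
	slo := e2e - lastPull.Sub(firstPull) // image-pull window excluded
	fmt.Println(e2e, slo)                // prints: 7.577701869s 1.822673118s
}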
Jan 21 14:54:35 crc kubenswrapper[4902]: I0121 14:54:35.564324 4902 generic.go:334] "Generic (PLEG): container finished" podID="30ff158a-452e-4180-b99e-9a171035d794" containerID="5409eefc8bdd22f49ffbcce65cbb54882443f5c301c1ffa55abf84f9c6380456" exitCode=143
Jan 21 14:54:35 crc kubenswrapper[4902]: I0121 14:54:35.564441 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ff158a-452e-4180-b99e-9a171035d794","Type":"ContainerDied","Data":"5409eefc8bdd22f49ffbcce65cbb54882443f5c301c1ffa55abf84f9c6380456"}
Jan 21 14:54:36 crc kubenswrapper[4902]: I0121 14:54:36.801510 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 14:54:36 crc kubenswrapper[4902]: I0121 14:54:36.802675 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-log" containerID="cri-o://42387b53645327b4ee53a9ee8c0b9dee11eb438a26b3f114231094d19f35ba72" gracePeriod=30
Jan 21 14:54:36 crc kubenswrapper[4902]: I0121 14:54:36.802967 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-httpd" containerID="cri-o://14edd557813066a0cf1d74f214913ca1b420c477db44eb9e034dfe2ede5a72df" gracePeriod=30
Jan 21 14:54:37 crc kubenswrapper[4902]: I0121 14:54:37.584478 4902 generic.go:334] "Generic (PLEG): container finished" podID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerID="42387b53645327b4ee53a9ee8c0b9dee11eb438a26b3f114231094d19f35ba72" exitCode=143
Jan 21 14:54:37 crc kubenswrapper[4902]: I0121 14:54:37.584596 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5048d12c-b66b-4f2f-a706-0e2978b5f0db","Type":"ContainerDied","Data":"42387b53645327b4ee53a9ee8c0b9dee11eb438a26b3f114231094d19f35ba72"}
Jan 21 14:54:38 crc kubenswrapper[4902]: I0121 14:54:38.611141 4902 generic.go:334] "Generic (PLEG): container finished" podID="30ff158a-452e-4180-b99e-9a171035d794" containerID="71cca2f7cf5320c189b79d957584fa123f879c7a9cb4707bef6dd5f5eb455d19" exitCode=0
Jan 21 14:54:38 crc kubenswrapper[4902]: I0121 14:54:38.611225 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ff158a-452e-4180-b99e-9a171035d794","Type":"ContainerDied","Data":"71cca2f7cf5320c189b79d957584fa123f879c7a9cb4707bef6dd5f5eb455d19"}
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.125952 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
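The exit codes in the PLEG lines follow the shell's 128+signal convention: exitCode=143 is 128 + 15, i.e. the glance containers died on the SIGTERM delivered at the start of their 30-second grace period; exitCode=0 means the process shut down cleanly before the grace period expired; sg-core's earlier exitCode=2 is the process's own error status. A small decoder for that convention:

package main

import "fmt"

// describeExit interprets a container exit code the way the PLEG lines
// above should be read: codes above 128 encode a fatal signal.
func describeExit(code int) string {
	if code > 128 {
		return fmt.Sprintf("killed by signal %d", code-128)
	}
	return fmt.Sprintf("exited with status %d", code)
}

func main() {
	for _, c := range []int{0, 143, 2} {
		fmt.Printf("exitCode=%d -> %s\n", c, describeExit(c))
	}
}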
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.319598 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-public-tls-certs\") pod \"30ff158a-452e-4180-b99e-9a171035d794\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") "
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.319658 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-combined-ca-bundle\") pod \"30ff158a-452e-4180-b99e-9a171035d794\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") "
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.319801 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"30ff158a-452e-4180-b99e-9a171035d794\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") "
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.319894 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-httpd-run\") pod \"30ff158a-452e-4180-b99e-9a171035d794\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") "
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.319983 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-config-data\") pod \"30ff158a-452e-4180-b99e-9a171035d794\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") "
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.320116 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-scripts\") pod \"30ff158a-452e-4180-b99e-9a171035d794\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") "
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.320220 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x9kr\" (UniqueName: \"kubernetes.io/projected/30ff158a-452e-4180-b99e-9a171035d794-kube-api-access-5x9kr\") pod \"30ff158a-452e-4180-b99e-9a171035d794\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") "
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.320258 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-logs\") pod \"30ff158a-452e-4180-b99e-9a171035d794\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") "
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.320351 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "30ff158a-452e-4180-b99e-9a171035d794" (UID: "30ff158a-452e-4180-b99e-9a171035d794"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.320868 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.320866 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-logs" (OuterVolumeSpecName: "logs") pod "30ff158a-452e-4180-b99e-9a171035d794" (UID: "30ff158a-452e-4180-b99e-9a171035d794"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.327258 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30ff158a-452e-4180-b99e-9a171035d794-kube-api-access-5x9kr" (OuterVolumeSpecName: "kube-api-access-5x9kr") pod "30ff158a-452e-4180-b99e-9a171035d794" (UID: "30ff158a-452e-4180-b99e-9a171035d794"). InnerVolumeSpecName "kube-api-access-5x9kr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.327310 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-scripts" (OuterVolumeSpecName: "scripts") pod "30ff158a-452e-4180-b99e-9a171035d794" (UID: "30ff158a-452e-4180-b99e-9a171035d794"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.327975 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "30ff158a-452e-4180-b99e-9a171035d794" (UID: "30ff158a-452e-4180-b99e-9a171035d794"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.349015 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30ff158a-452e-4180-b99e-9a171035d794" (UID: "30ff158a-452e-4180-b99e-9a171035d794"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.374544 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "30ff158a-452e-4180-b99e-9a171035d794" (UID: "30ff158a-452e-4180-b99e-9a171035d794"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.388015 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-config-data" (OuterVolumeSpecName: "config-data") pod "30ff158a-452e-4180-b99e-9a171035d794" (UID: "30ff158a-452e-4180-b99e-9a171035d794"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
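Each UnmountVolume.TearDown line above carries two names for the same volume: OuterVolumeSpecName is the name the pod spec uses ("glance"), while InnerVolumeSpecName is the plugin-level name ("local-storage09-crc" for the local PV; for secrets and empty-dirs the two coincide). A toy illustration of that mapping, types hypothetical:

package main

import "fmt"

// volumeNames pairs the pod-spec-facing name with the plugin-internal
// one, as reported in the TearDown lines.
type volumeNames struct {
	outer, inner, plugin string
}

func main() {
	vols := []volumeNames{
		{"glance", "local-storage09-crc", "kubernetes.io/local-volume"}, // PV-backed: names differ
		{"scripts", "scripts", "kubernetes.io/secret"},                  // secret: names coincide
		{"logs", "logs", "kubernetes.io/empty-dir"},
	}
	for _, v := range vols {
		fmt.Printf("TearDown (OuterVolumeSpecName: %q) InnerVolumeSpecName %q PluginName %q\n", v.outer, v.inner, v.plugin)
	}
}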
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.423105 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x9kr\" (UniqueName: \"kubernetes.io/projected/30ff158a-452e-4180-b99e-9a171035d794-kube-api-access-5x9kr\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.423150 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-logs\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.423167 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.423180 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.423223 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.423238 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.423250 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.446780 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.524636 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.621920 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ff158a-452e-4180-b99e-9a171035d794","Type":"ContainerDied","Data":"fd2671c5a041bc1da6743eacf4aa7bb033c883d2faa27ca05b2e9c42b04bf8ce"}
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.621982 4902 scope.go:117] "RemoveContainer" containerID="71cca2f7cf5320c189b79d957584fa123f879c7a9cb4707bef6dd5f5eb455d19"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.622020 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.666672 4902 scope.go:117] "RemoveContainer" containerID="5409eefc8bdd22f49ffbcce65cbb54882443f5c301c1ffa55abf84f9c6380456"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.668006 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.721885 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.754337 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 14:54:39 crc kubenswrapper[4902]: E0121 14:54:39.754934 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ff158a-452e-4180-b99e-9a171035d794" containerName="glance-log"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.754967 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ff158a-452e-4180-b99e-9a171035d794" containerName="glance-log"
Jan 21 14:54:39 crc kubenswrapper[4902]: E0121 14:54:39.754991 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ff158a-452e-4180-b99e-9a171035d794" containerName="glance-httpd"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.754998 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ff158a-452e-4180-b99e-9a171035d794" containerName="glance-httpd"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.755267 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ff158a-452e-4180-b99e-9a171035d794" containerName="glance-httpd"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.755293 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ff158a-452e-4180-b99e-9a171035d794" containerName="glance-log"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.756725 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
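The SyncLoop lines above are the kubelet's main loop reacting to watch events from the apiserver. The DELETE, then REMOVE, then ADD run for glance-default-external-api-0 is the old pod object finishing graceful deletion and a same-named replacement (with the new UID ff41f7d4-...) arriving. A sketch of that dispatch; the event names come from the log, everything else is assumed:

package main

import "fmt"

// syncEvent models one watch notification handled by the sync loop.
type syncEvent struct {
	op   string // ADD, UPDATE, DELETE or REMOVE, as in the log
	pods []string
}

func dispatch(e syncEvent) {
	switch e.op {
	case "ADD", "UPDATE":
		fmt.Printf("SyncLoop %s source=%q pods=%v\n", e.op, "api", e.pods)
	case "DELETE":
		// graceful deletion requested: containers are killed with a grace period
		fmt.Printf("SyncLoop DELETE source=%q pods=%v\n", "api", e.pods)
	case "REMOVE":
		// the object is gone from the apiserver: forget the pod entirely
		fmt.Printf("SyncLoop REMOVE source=%q pods=%v\n", "api", e.pods)
	}
}

func main() {
	for _, op := range []string{"DELETE", "REMOVE", "ADD"} {
		dispatch(syncEvent{op: op, pods: []string{"openstack/glance-default-external-api-0"}})
	}
}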
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.759882 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.761414 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.763666 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.931334 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.931412 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.931442 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.931474 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.931497 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.931526 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-logs\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.931857 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.931948 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtwj7\" (UniqueName: \"kubernetes.io/projected/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-kube-api-access-xtwj7\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.957382 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.148:9292/healthcheck\": read tcp 10.217.0.2:49572->10.217.0.148:9292: read: connection reset by peer"
Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.958167 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.148:9292/healthcheck\": read tcp 10.217.0.2:49568->10.217.0.148:9292: read: connection reset by peer"
Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.034829 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.035013 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtwj7\" (UniqueName: \"kubernetes.io/projected/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-kube-api-access-xtwj7\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.035542 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.035655 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.035690 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.035778 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.035797 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0"
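The two Probe failed lines above are expected during teardown: the readiness prober GETs https://10.217.0.148:9292/healthcheck while glance is shutting down, the connection is reset mid-read, and the failed probe marks the pod not ready so the endpoints controller stops routing to it. A minimal stand-alone prober in the same spirit; the endpoint, timeout and the InsecureSkipVerify shortcut are illustrative, not the kubelet's implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe performs one readiness check: any transport error (such as
// "read: connection reset by peer" from a server that is shutting
// down) counts as a probe failure, just like a non-2xx status.
func probe(url string) error {
	client := &http.Client{
		Timeout: time.Second,
		Transport: &http.Transport{
			// the healthcheck is served over TLS with a cluster-internal CA;
			// skipping verification here keeps the sketch self-contained
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probeResult=failure: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probeResult=failure: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	fmt.Println(probe("https://10.217.0.148:9292/healthcheck"))
}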
Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.035813 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.035941 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-logs\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.036542 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-logs\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.036897 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.043032 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.043139 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.043273 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.043347 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.057094 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtwj7\" (UniqueName: \"kubernetes.io/projected/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-kube-api-access-xtwj7\") pod \"glance-default-external-api-0\" (UID: 
\"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.069938 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.076309 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.314848 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30ff158a-452e-4180-b99e-9a171035d794" path="/var/lib/kubelet/pods/30ff158a-452e-4180-b99e-9a171035d794/volumes" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.637876 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff47e21a-75a1-4d66-b599-725966fa456e" containerID="2529bfc2f37257b9b0cf5337629fa086f24c54088c21213974bcbbd9f5d66189" exitCode=0 Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.638174 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerDied","Data":"2529bfc2f37257b9b0cf5337629fa086f24c54088c21213974bcbbd9f5d66189"} Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.640205 4902 generic.go:334] "Generic (PLEG): container finished" podID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerID="14edd557813066a0cf1d74f214913ca1b420c477db44eb9e034dfe2ede5a72df" exitCode=0 Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.640251 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5048d12c-b66b-4f2f-a706-0e2978b5f0db","Type":"ContainerDied","Data":"14edd557813066a0cf1d74f214913ca1b420c477db44eb9e034dfe2ede5a72df"} Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.850686 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.954828 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-config-data\") pod \"ff47e21a-75a1-4d66-b599-725966fa456e\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.954920 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-log-httpd\") pod \"ff47e21a-75a1-4d66-b599-725966fa456e\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.955025 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-scripts\") pod \"ff47e21a-75a1-4d66-b599-725966fa456e\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.955069 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-combined-ca-bundle\") pod \"ff47e21a-75a1-4d66-b599-725966fa456e\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.955107 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-run-httpd\") pod \"ff47e21a-75a1-4d66-b599-725966fa456e\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.955123 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-sg-core-conf-yaml\") pod \"ff47e21a-75a1-4d66-b599-725966fa456e\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.955147 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmm5r\" (UniqueName: \"kubernetes.io/projected/ff47e21a-75a1-4d66-b599-725966fa456e-kube-api-access-wmm5r\") pod \"ff47e21a-75a1-4d66-b599-725966fa456e\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.956411 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ff47e21a-75a1-4d66-b599-725966fa456e" (UID: "ff47e21a-75a1-4d66-b599-725966fa456e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.957006 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ff47e21a-75a1-4d66-b599-725966fa456e" (UID: "ff47e21a-75a1-4d66-b599-725966fa456e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.962494 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-scripts" (OuterVolumeSpecName: "scripts") pod "ff47e21a-75a1-4d66-b599-725966fa456e" (UID: "ff47e21a-75a1-4d66-b599-725966fa456e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.994278 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff47e21a-75a1-4d66-b599-725966fa456e-kube-api-access-wmm5r" (OuterVolumeSpecName: "kube-api-access-wmm5r") pod "ff47e21a-75a1-4d66-b599-725966fa456e" (UID: "ff47e21a-75a1-4d66-b599-725966fa456e"). InnerVolumeSpecName "kube-api-access-wmm5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.004271 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ff47e21a-75a1-4d66-b599-725966fa456e" (UID: "ff47e21a-75a1-4d66-b599-725966fa456e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.057552 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.057594 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.057606 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.057621 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmm5r\" (UniqueName: \"kubernetes.io/projected/ff47e21a-75a1-4d66-b599-725966fa456e-kube-api-access-wmm5r\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.057633 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.085203 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff47e21a-75a1-4d66-b599-725966fa456e" (UID: "ff47e21a-75a1-4d66-b599-725966fa456e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.085585 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.150140 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-config-data" (OuterVolumeSpecName: "config-data") pod "ff47e21a-75a1-4d66-b599-725966fa456e" (UID: "ff47e21a-75a1-4d66-b599-725966fa456e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.159546 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.159584 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.260812 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.261150 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-config-data\") pod \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.261189 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-combined-ca-bundle\") pod \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.261231 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-scripts\") pod \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.261280 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tphvc\" (UniqueName: \"kubernetes.io/projected/5048d12c-b66b-4f2f-a706-0e2978b5f0db-kube-api-access-tphvc\") pod \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.261328 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-httpd-run\") pod \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.261413 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-logs\") pod \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.261442 4902 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-internal-tls-certs\") pod \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.263714 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-logs" (OuterVolumeSpecName: "logs") pod "5048d12c-b66b-4f2f-a706-0e2978b5f0db" (UID: "5048d12c-b66b-4f2f-a706-0e2978b5f0db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.263898 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5048d12c-b66b-4f2f-a706-0e2978b5f0db" (UID: "5048d12c-b66b-4f2f-a706-0e2978b5f0db"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.266441 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "5048d12c-b66b-4f2f-a706-0e2978b5f0db" (UID: "5048d12c-b66b-4f2f-a706-0e2978b5f0db"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.266921 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5048d12c-b66b-4f2f-a706-0e2978b5f0db-kube-api-access-tphvc" (OuterVolumeSpecName: "kube-api-access-tphvc") pod "5048d12c-b66b-4f2f-a706-0e2978b5f0db" (UID: "5048d12c-b66b-4f2f-a706-0e2978b5f0db"). InnerVolumeSpecName "kube-api-access-tphvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.269166 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-scripts" (OuterVolumeSpecName: "scripts") pod "5048d12c-b66b-4f2f-a706-0e2978b5f0db" (UID: "5048d12c-b66b-4f2f-a706-0e2978b5f0db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.290586 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5048d12c-b66b-4f2f-a706-0e2978b5f0db" (UID: "5048d12c-b66b-4f2f-a706-0e2978b5f0db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.325074 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5048d12c-b66b-4f2f-a706-0e2978b5f0db" (UID: "5048d12c-b66b-4f2f-a706-0e2978b5f0db"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.337956 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-config-data" (OuterVolumeSpecName: "config-data") pod "5048d12c-b66b-4f2f-a706-0e2978b5f0db" (UID: "5048d12c-b66b-4f2f-a706-0e2978b5f0db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.363896 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.363943 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.363957 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.363971 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.363982 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tphvc\" (UniqueName: \"kubernetes.io/projected/5048d12c-b66b-4f2f-a706-0e2978b5f0db-kube-api-access-tphvc\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.363994 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.364006 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.364018 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.385948 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.467403 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.651480 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5048d12c-b66b-4f2f-a706-0e2978b5f0db","Type":"ContainerDied","Data":"8a331ab6e6d4779bd2c4ccae990c6e7b561e92f584c35ef4a58e44ff1375f620"} Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.651539 4902 scope.go:117] "RemoveContainer" containerID="14edd557813066a0cf1d74f214913ca1b420c477db44eb9e034dfe2ede5a72df" Jan 21 
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.666455 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerDied","Data":"01acb75f5aa7a52c23b6938805bfcfd86387b55bac88b2a0c34ff3a7e37b8a51"}
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.666534 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.700811 4902 scope.go:117] "RemoveContainer" containerID="42387b53645327b4ee53a9ee8c0b9dee11eb438a26b3f114231094d19f35ba72"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.703131 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.720093 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.734082 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.737233 4902 scope.go:117] "RemoveContainer" containerID="d65b8776fa29099f87b966a2054c9cd54a03d0fce6f73cb30c4f261bbc860619"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.745059 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.759588 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 14:54:41 crc kubenswrapper[4902]: E0121 14:54:41.759932 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="sg-core"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.759944 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="sg-core"
Jan 21 14:54:41 crc kubenswrapper[4902]: E0121 14:54:41.759959 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-log"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.759965 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-log"
Jan 21 14:54:41 crc kubenswrapper[4902]: E0121 14:54:41.759974 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="ceilometer-notification-agent"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.759981 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="ceilometer-notification-agent"
Jan 21 14:54:41 crc kubenswrapper[4902]: E0121 14:54:41.759999 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-httpd"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760005 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-httpd"
Jan 21 14:54:41 crc kubenswrapper[4902]: E0121 14:54:41.760026 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="ceilometer-central-agent"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760033 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="ceilometer-central-agent"
Jan 21 14:54:41 crc kubenswrapper[4902]: E0121 14:54:41.760060 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="proxy-httpd"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760067 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="proxy-httpd"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760222 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-log"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760232 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="proxy-httpd"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760241 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-httpd"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760250 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="sg-core"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760269 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="ceilometer-notification-agent"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760278 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="ceilometer-central-agent"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.761093 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.768582 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.769210 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.779249 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.788827 4902 scope.go:117] "RemoveContainer" containerID="608e1b4097af22dfa5d4ac8e16f96f4559ccfac69b91f2f6b0c474e5b5b3009a"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.795330 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.795449 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.799614 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.808687 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.856135 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.869861 4902 scope.go:117] "RemoveContainer" containerID="9488c0012fa14d74a2416c6470b9e3bb2fd6546e005a8aa4d68294c775bb7bf3"
Jan 21 14:54:41 crc kubenswrapper[4902]: W0121 14:54:41.882651 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff41f7d4_e15a_4fc3_afd9_5d86fe05768f.slice/crio-fab7af2822b1e0c413efff882a4ddbb2ff2b86596095fd2bcec07bee48c5bf19 WatchSource:0}: Error finding container fab7af2822b1e0c413efff882a4ddbb2ff2b86596095fd2bcec07bee48c5bf19: Status 404 returned error can't find the container with id fab7af2822b1e0c413efff882a4ddbb2ff2b86596095fd2bcec07bee48c5bf19
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.883857 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.884067 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.884114 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.884209 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.884278 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn54b\" (UniqueName: \"kubernetes.io/projected/c4168bc0-26cf-4786-9e28-95647462c372-kube-api-access-kn54b\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.884416 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.884438 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.884473 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.890703 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.949258 4902 scope.go:117] "RemoveContainer" containerID="2529bfc2f37257b9b0cf5337629fa086f24c54088c21213974bcbbd9f5d66189"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.987986 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988030 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988097 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-scripts\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988133 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988189 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988217 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988239 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-log-httpd\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988256 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvsbg\" (UniqueName: \"kubernetes.io/projected/62f98b44-f071-4c67-a176-1033550150c4-kube-api-access-fvsbg\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988284 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988297 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-config-data\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988321 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988367 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-run-httpd\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988387 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn54b\" (UniqueName: \"kubernetes.io/projected/c4168bc0-26cf-4786-9e28-95647462c372-kube-api-access-kn54b\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988402 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988426 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988507 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988653 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988881 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.992512 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.993963 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.994873 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.999059 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.010360 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn54b\" (UniqueName: \"kubernetes.io/projected/c4168bc0-26cf-4786-9e28-95647462c372-kube-api-access-kn54b\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.030690 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.089619 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-scripts\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0"
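
The local PV above goes through both halves of the kubelet mount flow: a single device-level MountVolume.MountDevice (mount path /mnt/openstack/pv11), then a per-pod MountVolume.SetUp, while the secret and empty-dir volumes only need SetUp. A minimal Go sketch of that two-phase shape, assuming simplified interfaces (these are stand-ins, not kubelet's actual volume plugin API):

    package main

    import "fmt"

    // deviceMounter runs once per device on the node; podMounter runs once
    // per pod referencing the volume. Simplified stand-in interfaces.
    type deviceMounter interface{ MountDevice(devicePath string) error }
    type podMounter interface{ SetUp(podUID string) error }

    type localVolume struct{ name string }

    func (v localVolume) MountDevice(devicePath string) error {
        fmt.Printf("MountVolume.MountDevice succeeded for %q device mount path %q\n", v.name, devicePath)
        return nil
    }

    func (v localVolume) SetUp(podUID string) error {
        fmt.Printf("MountVolume.SetUp succeeded for %q pod %q\n", v.name, podUID)
        return nil
    }

    func main() {
        v := localVolume{name: "local-storage11-crc"}
        _ = v.MountDevice("/mnt/openstack/pv11") // device phase, once
        _ = v.SetUp("c4168bc0-26cf-4786-9e28-95647462c372") // per-pod phase
    }
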
\"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-scripts\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.089967 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-log-httpd\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.090002 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvsbg\" (UniqueName: \"kubernetes.io/projected/62f98b44-f071-4c67-a176-1033550150c4-kube-api-access-fvsbg\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.090033 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-config-data\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.090144 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-run-httpd\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.090163 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.090185 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.091261 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-run-httpd\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.091256 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-log-httpd\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.093741 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-scripts\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.094452 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.094844 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.095577 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-config-data\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.111781 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.113778 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvsbg\" (UniqueName: \"kubernetes.io/projected/62f98b44-f071-4c67-a176-1033550150c4-kube-api-access-fvsbg\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.124404 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.314244 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" path="/var/lib/kubelet/pods/5048d12c-b66b-4f2f-a706-0e2978b5f0db/volumes" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.314879 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" path="/var/lib/kubelet/pods/ff47e21a-75a1-4d66-b599-725966fa456e/volumes" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.680349 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f","Type":"ContainerStarted","Data":"11db3a976cf5ea9322be5da7913baf9b9709079192d4b3c588596ad2459819bd"} Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.680690 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f","Type":"ContainerStarted","Data":"fab7af2822b1e0c413efff882a4ddbb2ff2b86596095fd2bcec07bee48c5bf19"} Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.735436 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.752922 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:54:42 crc kubenswrapper[4902]: W0121 14:54:42.758404 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62f98b44_f071_4c67_a176_1033550150c4.slice/crio-288e100dc9d7775cb50fbd2d74baf68d2b7ce9f194f99cce744864657ed7517e WatchSource:0}: Error finding container 288e100dc9d7775cb50fbd2d74baf68d2b7ce9f194f99cce744864657ed7517e: Status 404 returned error can't find the container with id 
Jan 21 14:54:42 crc kubenswrapper[4902]: W0121 14:54:42.765063 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4168bc0_26cf_4786_9e28_95647462c372.slice/crio-7f3690d2641b9d3eb31fce9c2db367653c8289ff406af6ce68593f803e401401 WatchSource:0}: Error finding container 7f3690d2641b9d3eb31fce9c2db367653c8289ff406af6ce68593f803e401401: Status 404 returned error can't find the container with id 7f3690d2641b9d3eb31fce9c2db367653c8289ff406af6ce68593f803e401401
Jan 21 14:54:43 crc kubenswrapper[4902]: I0121 14:54:43.690269 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4168bc0-26cf-4786-9e28-95647462c372","Type":"ContainerStarted","Data":"baf5060a9be38be6557c2e269eeef0d7067b99a8ffc55de9fabcd6c3d7fd4375"}
Jan 21 14:54:43 crc kubenswrapper[4902]: I0121 14:54:43.690621 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4168bc0-26cf-4786-9e28-95647462c372","Type":"ContainerStarted","Data":"7f3690d2641b9d3eb31fce9c2db367653c8289ff406af6ce68593f803e401401"}
Jan 21 14:54:43 crc kubenswrapper[4902]: I0121 14:54:43.692276 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerStarted","Data":"288e100dc9d7775cb50fbd2d74baf68d2b7ce9f194f99cce744864657ed7517e"}
Jan 21 14:54:43 crc kubenswrapper[4902]: I0121 14:54:43.694106 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f","Type":"ContainerStarted","Data":"29a7ab7f1ceb1b7248d2507a5eb6085cbee233d8230ecf775819b6f6ce78389e"}
Jan 21 14:54:43 crc kubenswrapper[4902]: I0121 14:54:43.716072 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.716055756 podStartE2EDuration="4.716055756s" podCreationTimestamp="2026-01-21 14:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:43.711894145 +0000 UTC m=+1245.788727184" watchObservedRunningTime="2026-01-21 14:54:43.716055756 +0000 UTC m=+1245.792888785"
Jan 21 14:54:44 crc kubenswrapper[4902]: I0121 14:54:44.723773 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerStarted","Data":"34b69c1a0b66c8657c3d6821b2c584dbfa27e44857ee98ee3f20ff8b752bed3a"}
Jan 21 14:54:44 crc kubenswrapper[4902]: I0121 14:54:44.728657 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4168bc0-26cf-4786-9e28-95647462c372","Type":"ContainerStarted","Data":"635d235f3800b93dc934010299b8ed6cf8c1efd38064d7aecd2aa2faa2ae46a0"}
Jan 21 14:54:44 crc kubenswrapper[4902]: I0121 14:54:44.758291 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.758267977 podStartE2EDuration="3.758267977s" podCreationTimestamp="2026-01-21 14:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:44.747656014 +0000 UTC m=+1246.824489033" watchObservedRunningTime="2026-01-21 14:54:44.758267977 +0000 UTC m=+1246.835101006"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.381812 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-vcplz"]
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.383117 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vcplz"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.400701 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vcplz"]
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.456961 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7b47\" (UniqueName: \"kubernetes.io/projected/035bb03b-fb8e-4b30-a30f-bfde97b03291-kube-api-access-g7b47\") pod \"nova-api-db-create-vcplz\" (UID: \"035bb03b-fb8e-4b30-a30f-bfde97b03291\") " pod="openstack/nova-api-db-create-vcplz"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.457114 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035bb03b-fb8e-4b30-a30f-bfde97b03291-operator-scripts\") pod \"nova-api-db-create-vcplz\" (UID: \"035bb03b-fb8e-4b30-a30f-bfde97b03291\") " pod="openstack/nova-api-db-create-vcplz"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.490412 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6f87-account-create-update-w85cg"]
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.491462 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6f87-account-create-update-w85cg"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.503569 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6f87-account-create-update-w85cg"]
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.506967 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.566189 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7b47\" (UniqueName: \"kubernetes.io/projected/035bb03b-fb8e-4b30-a30f-bfde97b03291-kube-api-access-g7b47\") pod \"nova-api-db-create-vcplz\" (UID: \"035bb03b-fb8e-4b30-a30f-bfde97b03291\") " pod="openstack/nova-api-db-create-vcplz"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.566297 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035bb03b-fb8e-4b30-a30f-bfde97b03291-operator-scripts\") pod \"nova-api-db-create-vcplz\" (UID: \"035bb03b-fb8e-4b30-a30f-bfde97b03291\") " pod="openstack/nova-api-db-create-vcplz"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.566320 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-operator-scripts\") pod \"nova-api-6f87-account-create-update-w85cg\" (UID: \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\") " pod="openstack/nova-api-6f87-account-create-update-w85cg"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.566354 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ttmx\" (UniqueName: \"kubernetes.io/projected/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-kube-api-access-9ttmx\") pod \"nova-api-6f87-account-create-update-w85cg\" (UID: \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\") " pod="openstack/nova-api-6f87-account-create-update-w85cg"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.567149 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035bb03b-fb8e-4b30-a30f-bfde97b03291-operator-scripts\") pod \"nova-api-db-create-vcplz\" (UID: \"035bb03b-fb8e-4b30-a30f-bfde97b03291\") " pod="openstack/nova-api-db-create-vcplz"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.582005 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-rkcxd"]
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.583149 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rkcxd"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.597908 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rkcxd"]
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.600535 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7b47\" (UniqueName: \"kubernetes.io/projected/035bb03b-fb8e-4b30-a30f-bfde97b03291-kube-api-access-g7b47\") pod \"nova-api-db-create-vcplz\" (UID: \"035bb03b-fb8e-4b30-a30f-bfde97b03291\") " pod="openstack/nova-api-db-create-vcplz"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.667756 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ttmx\" (UniqueName: \"kubernetes.io/projected/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-kube-api-access-9ttmx\") pod \"nova-api-6f87-account-create-update-w85cg\" (UID: \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\") " pod="openstack/nova-api-6f87-account-create-update-w85cg"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.667849 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb2fcf0-980e-418a-b776-ec7836101d6b-operator-scripts\") pod \"nova-cell0-db-create-rkcxd\" (UID: \"acb2fcf0-980e-418a-b776-ec7836101d6b\") " pod="openstack/nova-cell0-db-create-rkcxd"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.667878 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6z82\" (UniqueName: \"kubernetes.io/projected/acb2fcf0-980e-418a-b776-ec7836101d6b-kube-api-access-c6z82\") pod \"nova-cell0-db-create-rkcxd\" (UID: \"acb2fcf0-980e-418a-b776-ec7836101d6b\") " pod="openstack/nova-cell0-db-create-rkcxd"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.673382 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-operator-scripts\") pod \"nova-api-6f87-account-create-update-w85cg\" (UID: \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\") " pod="openstack/nova-api-6f87-account-create-update-w85cg"
Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.674240 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-operator-scripts\") pod \"nova-api-6f87-account-create-update-w85cg\" (UID: \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\") " pod="openstack/nova-api-6f87-account-create-update-w85cg"
pod="openstack/nova-api-6f87-account-create-update-w85cg" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.689621 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-lwq2z"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.691153 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.700146 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ttmx\" (UniqueName: \"kubernetes.io/projected/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-kube-api-access-9ttmx\") pod \"nova-api-6f87-account-create-update-w85cg\" (UID: \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\") " pod="openstack/nova-api-6f87-account-create-update-w85cg" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.707379 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lwq2z"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.708730 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vcplz" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.724378 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-lmnmw"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.725740 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.729312 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.736180 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-lmnmw"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.742455 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerStarted","Data":"87f4452a0aa396a56b842123857ff8f77a41868b23cb4fe3ecb0ab8734644f5a"} Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.775746 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd96v\" (UniqueName: \"kubernetes.io/projected/3dfed335-1a3f-4e42-b593-e5958039dadc-kube-api-access-xd96v\") pod \"nova-cell1-db-create-lwq2z\" (UID: \"3dfed335-1a3f-4e42-b593-e5958039dadc\") " pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.776068 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb2fcf0-980e-418a-b776-ec7836101d6b-operator-scripts\") pod \"nova-cell0-db-create-rkcxd\" (UID: \"acb2fcf0-980e-418a-b776-ec7836101d6b\") " pod="openstack/nova-cell0-db-create-rkcxd" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.779305 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfed335-1a3f-4e42-b593-e5958039dadc-operator-scripts\") pod \"nova-cell1-db-create-lwq2z\" (UID: \"3dfed335-1a3f-4e42-b593-e5958039dadc\") " pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.779411 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-c6z82\" (UniqueName: \"kubernetes.io/projected/acb2fcf0-980e-418a-b776-ec7836101d6b-kube-api-access-c6z82\") pod \"nova-cell0-db-create-rkcxd\" (UID: \"acb2fcf0-980e-418a-b776-ec7836101d6b\") " pod="openstack/nova-cell0-db-create-rkcxd" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.779437 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb2fcf0-980e-418a-b776-ec7836101d6b-operator-scripts\") pod \"nova-cell0-db-create-rkcxd\" (UID: \"acb2fcf0-980e-418a-b776-ec7836101d6b\") " pod="openstack/nova-cell0-db-create-rkcxd" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.807521 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6z82\" (UniqueName: \"kubernetes.io/projected/acb2fcf0-980e-418a-b776-ec7836101d6b-kube-api-access-c6z82\") pod \"nova-cell0-db-create-rkcxd\" (UID: \"acb2fcf0-980e-418a-b776-ec7836101d6b\") " pod="openstack/nova-cell0-db-create-rkcxd" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.822319 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7ca2-account-create-update-tz26x"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.828336 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.829433 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6f87-account-create-update-w85cg" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.837121 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7ca2-account-create-update-tz26x"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.837450 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.881812 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2gb7\" (UniqueName: \"kubernetes.io/projected/249b1461-ed19-4572-b1e6-c5c44cfa9145-kube-api-access-z2gb7\") pod \"nova-cell0-7df7-account-create-update-lmnmw\" (UID: \"249b1461-ed19-4572-b1e6-c5c44cfa9145\") " pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.881893 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd96v\" (UniqueName: \"kubernetes.io/projected/3dfed335-1a3f-4e42-b593-e5958039dadc-kube-api-access-xd96v\") pod \"nova-cell1-db-create-lwq2z\" (UID: \"3dfed335-1a3f-4e42-b593-e5958039dadc\") " pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.881915 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/249b1461-ed19-4572-b1e6-c5c44cfa9145-operator-scripts\") pod \"nova-cell0-7df7-account-create-update-lmnmw\" (UID: \"249b1461-ed19-4572-b1e6-c5c44cfa9145\") " pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.881949 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfed335-1a3f-4e42-b593-e5958039dadc-operator-scripts\") pod \"nova-cell1-db-create-lwq2z\" (UID: 
\"3dfed335-1a3f-4e42-b593-e5958039dadc\") " pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.884689 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfed335-1a3f-4e42-b593-e5958039dadc-operator-scripts\") pod \"nova-cell1-db-create-lwq2z\" (UID: \"3dfed335-1a3f-4e42-b593-e5958039dadc\") " pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.908649 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd96v\" (UniqueName: \"kubernetes.io/projected/3dfed335-1a3f-4e42-b593-e5958039dadc-kube-api-access-xd96v\") pod \"nova-cell1-db-create-lwq2z\" (UID: \"3dfed335-1a3f-4e42-b593-e5958039dadc\") " pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.983744 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjctc\" (UniqueName: \"kubernetes.io/projected/6baf26e6-f197-4ae1-b7a5-40a1147e3276-kube-api-access-vjctc\") pod \"nova-cell1-7ca2-account-create-update-tz26x\" (UID: \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\") " pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.983808 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/249b1461-ed19-4572-b1e6-c5c44cfa9145-operator-scripts\") pod \"nova-cell0-7df7-account-create-update-lmnmw\" (UID: \"249b1461-ed19-4572-b1e6-c5c44cfa9145\") " pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.984228 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2gb7\" (UniqueName: \"kubernetes.io/projected/249b1461-ed19-4572-b1e6-c5c44cfa9145-kube-api-access-z2gb7\") pod \"nova-cell0-7df7-account-create-update-lmnmw\" (UID: \"249b1461-ed19-4572-b1e6-c5c44cfa9145\") " pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.984348 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6baf26e6-f197-4ae1-b7a5-40a1147e3276-operator-scripts\") pod \"nova-cell1-7ca2-account-create-update-tz26x\" (UID: \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\") " pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.984691 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/249b1461-ed19-4572-b1e6-c5c44cfa9145-operator-scripts\") pod \"nova-cell0-7df7-account-create-update-lmnmw\" (UID: \"249b1461-ed19-4572-b1e6-c5c44cfa9145\") " pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.011489 4902 util.go:30] "No sandbox for pod can be found. 
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.014503 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2gb7\" (UniqueName: \"kubernetes.io/projected/249b1461-ed19-4572-b1e6-c5c44cfa9145-kube-api-access-z2gb7\") pod \"nova-cell0-7df7-account-create-update-lmnmw\" (UID: \"249b1461-ed19-4572-b1e6-c5c44cfa9145\") " pod="openstack/nova-cell0-7df7-account-create-update-lmnmw"
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.046675 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lwq2z"
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.086093 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6baf26e6-f197-4ae1-b7a5-40a1147e3276-operator-scripts\") pod \"nova-cell1-7ca2-account-create-update-tz26x\" (UID: \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\") " pod="openstack/nova-cell1-7ca2-account-create-update-tz26x"
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.086143 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjctc\" (UniqueName: \"kubernetes.io/projected/6baf26e6-f197-4ae1-b7a5-40a1147e3276-kube-api-access-vjctc\") pod \"nova-cell1-7ca2-account-create-update-tz26x\" (UID: \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\") " pod="openstack/nova-cell1-7ca2-account-create-update-tz26x"
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.087268 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6baf26e6-f197-4ae1-b7a5-40a1147e3276-operator-scripts\") pod \"nova-cell1-7ca2-account-create-update-tz26x\" (UID: \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\") " pod="openstack/nova-cell1-7ca2-account-create-update-tz26x"
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.138764 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7df7-account-create-update-lmnmw"
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.225085 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjctc\" (UniqueName: \"kubernetes.io/projected/6baf26e6-f197-4ae1-b7a5-40a1147e3276-kube-api-access-vjctc\") pod \"nova-cell1-7ca2-account-create-update-tz26x\" (UID: \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\") " pod="openstack/nova-cell1-7ca2-account-create-update-tz26x"
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.326663 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vcplz"]
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.456585 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7ca2-account-create-update-tz26x"
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.496899 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6f87-account-create-update-w85cg"]
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.769693 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-lmnmw"]
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.790183 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vcplz" event={"ID":"035bb03b-fb8e-4b30-a30f-bfde97b03291","Type":"ContainerStarted","Data":"70aa2cf0840fc5f93cbebf841da43d8a387c82a6f9fae61768e764946c976710"}
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.790222 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vcplz" event={"ID":"035bb03b-fb8e-4b30-a30f-bfde97b03291","Type":"ContainerStarted","Data":"790e70b4679b9c689a659e471e0ae68223287ae93a05a9119c36b9badf4b2802"}
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.805151 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-vcplz" podStartSLOduration=1.805131776 podStartE2EDuration="1.805131776s" podCreationTimestamp="2026-01-21 14:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:46.802991659 +0000 UTC m=+1248.879824688" watchObservedRunningTime="2026-01-21 14:54:46.805131776 +0000 UTC m=+1248.881964815"
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.816799 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerStarted","Data":"1efb79b1ee63c60deba157f0a412f37783a993f9bcdb9443447b3f3f4120a6da"}
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.819374 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6f87-account-create-update-w85cg" event={"ID":"ab17d58c-9dc5-4a20-8ca7-3d06256080c3","Type":"ContainerStarted","Data":"21b2e92c10a22b6aaa2cb8e856bbc1e0f6bd360696dcd90517a4f77ba803ad6c"}
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.850202 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rkcxd"]
Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.890371 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lwq2z"]
Jan 21 14:54:46 crc kubenswrapper[4902]: W0121 14:54:46.951784 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dfed335_1a3f_4e42_b593_e5958039dadc.slice/crio-99d347ec436ffb4d87dc2fd1fd62a807257c4da75f65f961edab01bd197a4a4d WatchSource:0}: Error finding container 99d347ec436ffb4d87dc2fd1fd62a807257c4da75f65f961edab01bd197a4a4d: Status 404 returned error can't find the container with id 99d347ec436ffb4d87dc2fd1fd62a807257c4da75f65f961edab01bd197a4a4d
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.032742 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7ca2-account-create-update-tz26x"]
Jan 21 14:54:47 crc kubenswrapper[4902]: W0121 14:54:47.041059 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6baf26e6_f197_4ae1_b7a5_40a1147e3276.slice/crio-267ecb3bc9a537fdb17a975468da1b4d571e614a6cdf56cc7b325f1ea8497bd1 WatchSource:0}: Error finding container 267ecb3bc9a537fdb17a975468da1b4d571e614a6cdf56cc7b325f1ea8497bd1: Status 404 returned error can't find the container with id 267ecb3bc9a537fdb17a975468da1b4d571e614a6cdf56cc7b325f1ea8497bd1
Jan 21 14:54:47 crc kubenswrapper[4902]: E0121 14:54:47.391598 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab17d58c_9dc5_4a20_8ca7_3d06256080c3.slice/crio-conmon-7ba71046f87bc5f37e174f3f4e4802a75f487d3b7ef216e3060c7e05c5b07755.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab17d58c_9dc5_4a20_8ca7_3d06256080c3.slice/crio-7ba71046f87bc5f37e174f3f4e4802a75f487d3b7ef216e3060c7e05c5b07755.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod249b1461_ed19_4572_b1e6_c5c44cfa9145.slice/crio-616e23f05d0b14c7f93dad0c321acc148cd9b2f70ea9019e00391345fff5c7ec.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.770078 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.770351 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.829914 4902 generic.go:334] "Generic (PLEG): container finished" podID="3dfed335-1a3f-4e42-b593-e5958039dadc" containerID="40c9945717c6eed6957b84780ec6e3c2301b7187e2ec047124eab88f68c26607" exitCode=0
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.829994 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lwq2z" event={"ID":"3dfed335-1a3f-4e42-b593-e5958039dadc","Type":"ContainerDied","Data":"40c9945717c6eed6957b84780ec6e3c2301b7187e2ec047124eab88f68c26607"}
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.830119 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lwq2z" event={"ID":"3dfed335-1a3f-4e42-b593-e5958039dadc","Type":"ContainerStarted","Data":"99d347ec436ffb4d87dc2fd1fd62a807257c4da75f65f961edab01bd197a4a4d"}
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.831818 4902 generic.go:334] "Generic (PLEG): container finished" podID="035bb03b-fb8e-4b30-a30f-bfde97b03291" containerID="70aa2cf0840fc5f93cbebf841da43d8a387c82a6f9fae61768e764946c976710" exitCode=0
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.831909 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vcplz" event={"ID":"035bb03b-fb8e-4b30-a30f-bfde97b03291","Type":"ContainerDied","Data":"70aa2cf0840fc5f93cbebf841da43d8a387c82a6f9fae61768e764946c976710"}
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.833329 4902 generic.go:334] "Generic (PLEG): container finished" podID="acb2fcf0-980e-418a-b776-ec7836101d6b" containerID="670dee5a8d2ff2f59f49370b068ca6bd9c9b2aa28c545aa7b4fee5f803108537" exitCode=0
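
The Liveness failures above are kubelet's HTTP prober being refused on the machine-config-daemon's health endpoint. A sketch of the equivalent check: the URL comes straight from the log, while the 1s timeout is an assumed stand-in for whatever timeoutSeconds the probe actually configures:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken from the probe output above; timeout is assumed.
        client := &http.Client{Timeout: 1 * time.Second}
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            fmt.Printf("Probe failed: %v\n", err) // e.g. connect: connection refused
            return
        }
        defer resp.Body.Close()
        fmt.Printf("Probe result: %s\n", resp.Status)
    }

The exit-code-zero "container finished" entries that follow, by contrast, are expected: the nova db-create containers are one-shot jobs, so ContainerDied with exitCode=0 means success.
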
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.833413 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rkcxd" event={"ID":"acb2fcf0-980e-418a-b776-ec7836101d6b","Type":"ContainerDied","Data":"670dee5a8d2ff2f59f49370b068ca6bd9c9b2aa28c545aa7b4fee5f803108537"}
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.833464 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rkcxd" event={"ID":"acb2fcf0-980e-418a-b776-ec7836101d6b","Type":"ContainerStarted","Data":"7b64a5e748d791311599d32f08da92ac54de356948a0feec51f5f71dca33fe52"}
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.838546 4902 generic.go:334] "Generic (PLEG): container finished" podID="6baf26e6-f197-4ae1-b7a5-40a1147e3276" containerID="184ed0c03e177484d5129302f45e661a1a2c46bd5bca5080444db5e2821f6ed4" exitCode=0
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.838636 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" event={"ID":"6baf26e6-f197-4ae1-b7a5-40a1147e3276","Type":"ContainerDied","Data":"184ed0c03e177484d5129302f45e661a1a2c46bd5bca5080444db5e2821f6ed4"}
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.838694 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" event={"ID":"6baf26e6-f197-4ae1-b7a5-40a1147e3276","Type":"ContainerStarted","Data":"267ecb3bc9a537fdb17a975468da1b4d571e614a6cdf56cc7b325f1ea8497bd1"}
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.841098 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerStarted","Data":"2747adc30292da8bb9ef5316d34e01fb5b9994182a7d3c00899398caa602de5d"}
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.841250 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.842684 4902 generic.go:334] "Generic (PLEG): container finished" podID="249b1461-ed19-4572-b1e6-c5c44cfa9145" containerID="616e23f05d0b14c7f93dad0c321acc148cd9b2f70ea9019e00391345fff5c7ec" exitCode=0
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.842718 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" event={"ID":"249b1461-ed19-4572-b1e6-c5c44cfa9145","Type":"ContainerDied","Data":"616e23f05d0b14c7f93dad0c321acc148cd9b2f70ea9019e00391345fff5c7ec"}
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.842748 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" event={"ID":"249b1461-ed19-4572-b1e6-c5c44cfa9145","Type":"ContainerStarted","Data":"b9c98a428978a8f247f96df66315edc95b73d5b134f4da3dfc12049bd1aa9848"}
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.848414 4902 generic.go:334] "Generic (PLEG): container finished" podID="ab17d58c-9dc5-4a20-8ca7-3d06256080c3" containerID="7ba71046f87bc5f37e174f3f4e4802a75f487d3b7ef216e3060c7e05c5b07755" exitCode=0
Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.848470 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6f87-account-create-update-w85cg" event={"ID":"ab17d58c-9dc5-4a20-8ca7-3d06256080c3","Type":"ContainerDied","Data":"7ba71046f87bc5f37e174f3f4e4802a75f487d3b7ef216e3060c7e05c5b07755"}
Jan 21 14:54:48 crc kubenswrapper[4902]: I0121 14:54:48.061388 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.412573071 podStartE2EDuration="7.061371836s" podCreationTimestamp="2026-01-21 14:54:41 +0000 UTC" firstStartedPulling="2026-01-21 14:54:42.761333549 +0000 UTC m=+1244.838166578" lastFinishedPulling="2026-01-21 14:54:47.410132314 +0000 UTC m=+1249.486965343" observedRunningTime="2026-01-21 14:54:48.029694041 +0000 UTC m=+1250.106527070" watchObservedRunningTime="2026-01-21 14:54:48.061371836 +0000 UTC m=+1250.138204865"
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.278846 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6f87-account-create-update-w85cg"
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.397344 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ttmx\" (UniqueName: \"kubernetes.io/projected/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-kube-api-access-9ttmx\") pod \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\" (UID: \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\") "
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.397414 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-operator-scripts\") pod \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\" (UID: \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\") "
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.400474 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab17d58c-9dc5-4a20-8ca7-3d06256080c3" (UID: "ab17d58c-9dc5-4a20-8ca7-3d06256080c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.409291 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-kube-api-access-9ttmx" (OuterVolumeSpecName: "kube-api-access-9ttmx") pod "ab17d58c-9dc5-4a20-8ca7-3d06256080c3" (UID: "ab17d58c-9dc5-4a20-8ca7-3d06256080c3"). InnerVolumeSpecName "kube-api-access-9ttmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.501261 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ttmx\" (UniqueName: \"kubernetes.io/projected/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-kube-api-access-9ttmx\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.501294 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.571317 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vcplz"
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.580317 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7df7-account-create-update-lmnmw"
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.590333 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7ca2-account-create-update-tz26x"
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.596980 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rkcxd"
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.602376 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6baf26e6-f197-4ae1-b7a5-40a1147e3276-operator-scripts\") pod \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\" (UID: \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\") "
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.602402 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2gb7\" (UniqueName: \"kubernetes.io/projected/249b1461-ed19-4572-b1e6-c5c44cfa9145-kube-api-access-z2gb7\") pod \"249b1461-ed19-4572-b1e6-c5c44cfa9145\" (UID: \"249b1461-ed19-4572-b1e6-c5c44cfa9145\") "
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.602429 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035bb03b-fb8e-4b30-a30f-bfde97b03291-operator-scripts\") pod \"035bb03b-fb8e-4b30-a30f-bfde97b03291\" (UID: \"035bb03b-fb8e-4b30-a30f-bfde97b03291\") "
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.602465 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjctc\" (UniqueName: \"kubernetes.io/projected/6baf26e6-f197-4ae1-b7a5-40a1147e3276-kube-api-access-vjctc\") pod \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\" (UID: \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\") "
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.602495 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6z82\" (UniqueName: \"kubernetes.io/projected/acb2fcf0-980e-418a-b776-ec7836101d6b-kube-api-access-c6z82\") pod \"acb2fcf0-980e-418a-b776-ec7836101d6b\" (UID: \"acb2fcf0-980e-418a-b776-ec7836101d6b\") "
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.602524 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7b47\" (UniqueName: \"kubernetes.io/projected/035bb03b-fb8e-4b30-a30f-bfde97b03291-kube-api-access-g7b47\") pod \"035bb03b-fb8e-4b30-a30f-bfde97b03291\" (UID: \"035bb03b-fb8e-4b30-a30f-bfde97b03291\") "
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.602564 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb2fcf0-980e-418a-b776-ec7836101d6b-operator-scripts\") pod \"acb2fcf0-980e-418a-b776-ec7836101d6b\" (UID: \"acb2fcf0-980e-418a-b776-ec7836101d6b\") "
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.602588 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/249b1461-ed19-4572-b1e6-c5c44cfa9145-operator-scripts\") pod \"249b1461-ed19-4572-b1e6-c5c44cfa9145\" (UID: \"249b1461-ed19-4572-b1e6-c5c44cfa9145\") "
Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.603448 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/configmap/249b1461-ed19-4572-b1e6-c5c44cfa9145-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "249b1461-ed19-4572-b1e6-c5c44cfa9145" (UID: "249b1461-ed19-4572-b1e6-c5c44cfa9145"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.604401 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6baf26e6-f197-4ae1-b7a5-40a1147e3276-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6baf26e6-f197-4ae1-b7a5-40a1147e3276" (UID: "6baf26e6-f197-4ae1-b7a5-40a1147e3276"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.604830 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035bb03b-fb8e-4b30-a30f-bfde97b03291-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "035bb03b-fb8e-4b30-a30f-bfde97b03291" (UID: "035bb03b-fb8e-4b30-a30f-bfde97b03291"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.604922 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb2fcf0-980e-418a-b776-ec7836101d6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "acb2fcf0-980e-418a-b776-ec7836101d6b" (UID: "acb2fcf0-980e-418a-b776-ec7836101d6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.608278 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acb2fcf0-980e-418a-b776-ec7836101d6b-kube-api-access-c6z82" (OuterVolumeSpecName: "kube-api-access-c6z82") pod "acb2fcf0-980e-418a-b776-ec7836101d6b" (UID: "acb2fcf0-980e-418a-b776-ec7836101d6b"). InnerVolumeSpecName "kube-api-access-c6z82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.608622 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/035bb03b-fb8e-4b30-a30f-bfde97b03291-kube-api-access-g7b47" (OuterVolumeSpecName: "kube-api-access-g7b47") pod "035bb03b-fb8e-4b30-a30f-bfde97b03291" (UID: "035bb03b-fb8e-4b30-a30f-bfde97b03291"). InnerVolumeSpecName "kube-api-access-g7b47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.609188 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/249b1461-ed19-4572-b1e6-c5c44cfa9145-kube-api-access-z2gb7" (OuterVolumeSpecName: "kube-api-access-z2gb7") pod "249b1461-ed19-4572-b1e6-c5c44cfa9145" (UID: "249b1461-ed19-4572-b1e6-c5c44cfa9145"). InnerVolumeSpecName "kube-api-access-z2gb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.609759 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.615763 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6baf26e6-f197-4ae1-b7a5-40a1147e3276-kube-api-access-vjctc" (OuterVolumeSpecName: "kube-api-access-vjctc") pod "6baf26e6-f197-4ae1-b7a5-40a1147e3276" (UID: "6baf26e6-f197-4ae1-b7a5-40a1147e3276"). InnerVolumeSpecName "kube-api-access-vjctc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.703732 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd96v\" (UniqueName: \"kubernetes.io/projected/3dfed335-1a3f-4e42-b593-e5958039dadc-kube-api-access-xd96v\") pod \"3dfed335-1a3f-4e42-b593-e5958039dadc\" (UID: \"3dfed335-1a3f-4e42-b593-e5958039dadc\") " Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.703898 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfed335-1a3f-4e42-b593-e5958039dadc-operator-scripts\") pod \"3dfed335-1a3f-4e42-b593-e5958039dadc\" (UID: \"3dfed335-1a3f-4e42-b593-e5958039dadc\") " Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704520 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7b47\" (UniqueName: \"kubernetes.io/projected/035bb03b-fb8e-4b30-a30f-bfde97b03291-kube-api-access-g7b47\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704561 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb2fcf0-980e-418a-b776-ec7836101d6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704576 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/249b1461-ed19-4572-b1e6-c5c44cfa9145-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704567 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dfed335-1a3f-4e42-b593-e5958039dadc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3dfed335-1a3f-4e42-b593-e5958039dadc" (UID: "3dfed335-1a3f-4e42-b593-e5958039dadc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704588 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6baf26e6-f197-4ae1-b7a5-40a1147e3276-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704636 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2gb7\" (UniqueName: \"kubernetes.io/projected/249b1461-ed19-4572-b1e6-c5c44cfa9145-kube-api-access-z2gb7\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704649 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035bb03b-fb8e-4b30-a30f-bfde97b03291-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704660 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjctc\" (UniqueName: \"kubernetes.io/projected/6baf26e6-f197-4ae1-b7a5-40a1147e3276-kube-api-access-vjctc\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704670 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6z82\" (UniqueName: \"kubernetes.io/projected/acb2fcf0-980e-418a-b776-ec7836101d6b-kube-api-access-c6z82\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.706527 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dfed335-1a3f-4e42-b593-e5958039dadc-kube-api-access-xd96v" (OuterVolumeSpecName: "kube-api-access-xd96v") pod "3dfed335-1a3f-4e42-b593-e5958039dadc" (UID: "3dfed335-1a3f-4e42-b593-e5958039dadc"). InnerVolumeSpecName "kube-api-access-xd96v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.806459 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd96v\" (UniqueName: \"kubernetes.io/projected/3dfed335-1a3f-4e42-b593-e5958039dadc-kube-api-access-xd96v\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.806486 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfed335-1a3f-4e42-b593-e5958039dadc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.866834 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vcplz" event={"ID":"035bb03b-fb8e-4b30-a30f-bfde97b03291","Type":"ContainerDied","Data":"790e70b4679b9c689a659e471e0ae68223287ae93a05a9119c36b9badf4b2802"} Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.866865 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vcplz" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.866874 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="790e70b4679b9c689a659e471e0ae68223287ae93a05a9119c36b9badf4b2802" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.868367 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rkcxd" event={"ID":"acb2fcf0-980e-418a-b776-ec7836101d6b","Type":"ContainerDied","Data":"7b64a5e748d791311599d32f08da92ac54de356948a0feec51f5f71dca33fe52"} Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.868406 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b64a5e748d791311599d32f08da92ac54de356948a0feec51f5f71dca33fe52" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.868463 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rkcxd" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.869694 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" event={"ID":"6baf26e6-f197-4ae1-b7a5-40a1147e3276","Type":"ContainerDied","Data":"267ecb3bc9a537fdb17a975468da1b4d571e614a6cdf56cc7b325f1ea8497bd1"} Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.869716 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.869733 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="267ecb3bc9a537fdb17a975468da1b4d571e614a6cdf56cc7b325f1ea8497bd1" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.871098 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" event={"ID":"249b1461-ed19-4572-b1e6-c5c44cfa9145","Type":"ContainerDied","Data":"b9c98a428978a8f247f96df66315edc95b73d5b134f4da3dfc12049bd1aa9848"} Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.871143 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9c98a428978a8f247f96df66315edc95b73d5b134f4da3dfc12049bd1aa9848" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.871119 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.872442 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6f87-account-create-update-w85cg" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.872421 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6f87-account-create-update-w85cg" event={"ID":"ab17d58c-9dc5-4a20-8ca7-3d06256080c3","Type":"ContainerDied","Data":"21b2e92c10a22b6aaa2cb8e856bbc1e0f6bd360696dcd90517a4f77ba803ad6c"} Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.872699 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21b2e92c10a22b6aaa2cb8e856bbc1e0f6bd360696dcd90517a4f77ba803ad6c" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.873834 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lwq2z" event={"ID":"3dfed335-1a3f-4e42-b593-e5958039dadc","Type":"ContainerDied","Data":"99d347ec436ffb4d87dc2fd1fd62a807257c4da75f65f961edab01bd197a4a4d"} Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.873863 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d347ec436ffb4d87dc2fd1fd62a807257c4da75f65f961edab01bd197a4a4d" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.874021 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:50 crc kubenswrapper[4902]: I0121 14:54:50.076960 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 14:54:50 crc kubenswrapper[4902]: I0121 14:54:50.077389 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 14:54:50 crc kubenswrapper[4902]: I0121 14:54:50.109107 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 14:54:50 crc kubenswrapper[4902]: I0121 14:54:50.121509 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 14:54:50 crc kubenswrapper[4902]: I0121 14:54:50.882077 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 14:54:50 crc kubenswrapper[4902]: I0121 14:54:50.882133 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.129946 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2kxkv"] Jan 21 14:54:51 crc kubenswrapper[4902]: E0121 14:54:51.130352 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="249b1461-ed19-4572-b1e6-c5c44cfa9145" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130375 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="249b1461-ed19-4572-b1e6-c5c44cfa9145" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: E0121 14:54:51.130410 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035bb03b-fb8e-4b30-a30f-bfde97b03291" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130419 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="035bb03b-fb8e-4b30-a30f-bfde97b03291" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: E0121 14:54:51.130441 4902 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3dfed335-1a3f-4e42-b593-e5958039dadc" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130449 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfed335-1a3f-4e42-b593-e5958039dadc" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: E0121 14:54:51.130470 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6baf26e6-f197-4ae1-b7a5-40a1147e3276" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130478 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="6baf26e6-f197-4ae1-b7a5-40a1147e3276" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: E0121 14:54:51.130499 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb2fcf0-980e-418a-b776-ec7836101d6b" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130508 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb2fcf0-980e-418a-b776-ec7836101d6b" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: E0121 14:54:51.130524 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab17d58c-9dc5-4a20-8ca7-3d06256080c3" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130532 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab17d58c-9dc5-4a20-8ca7-3d06256080c3" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130719 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb2fcf0-980e-418a-b776-ec7836101d6b" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130744 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="035bb03b-fb8e-4b30-a30f-bfde97b03291" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130752 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dfed335-1a3f-4e42-b593-e5958039dadc" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130763 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="6baf26e6-f197-4ae1-b7a5-40a1147e3276" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130775 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab17d58c-9dc5-4a20-8ca7-3d06256080c3" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130786 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="249b1461-ed19-4572-b1e6-c5c44cfa9145" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.131349 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.134727 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4sv6b" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.134891 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.134893 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.148564 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2kxkv"] Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.231260 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-config-data\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.231303 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-scripts\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.231334 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.231577 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr5w8\" (UniqueName: \"kubernetes.io/projected/f6ab900b-a76f-495c-a309-f597e2d835a8-kube-api-access-vr5w8\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.333705 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-config-data\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.333753 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-scripts\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.333790 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: 
\"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.333851 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr5w8\" (UniqueName: \"kubernetes.io/projected/f6ab900b-a76f-495c-a309-f597e2d835a8-kube-api-access-vr5w8\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.340586 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-scripts\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.342740 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-config-data\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.343667 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.373680 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr5w8\" (UniqueName: \"kubernetes.io/projected/f6ab900b-a76f-495c-a309-f597e2d835a8-kube-api-access-vr5w8\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.453449 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:52 crc kubenswrapper[4902]: I0121 14:54:52.019366 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2kxkv"] Jan 21 14:54:52 crc kubenswrapper[4902]: I0121 14:54:52.113101 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:52 crc kubenswrapper[4902]: I0121 14:54:52.113155 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:52 crc kubenswrapper[4902]: I0121 14:54:52.157623 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:52 crc kubenswrapper[4902]: I0121 14:54:52.171891 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:52 crc kubenswrapper[4902]: I0121 14:54:52.907210 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" event={"ID":"f6ab900b-a76f-495c-a309-f597e2d835a8","Type":"ContainerStarted","Data":"7a95c2bf8aaa1b14521dd5e9e1895d33696aae4fd5473b52aeb0bdb216066121"} Jan 21 14:54:52 crc kubenswrapper[4902]: I0121 14:54:52.907452 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:52 crc kubenswrapper[4902]: I0121 14:54:52.907614 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:53 crc kubenswrapper[4902]: I0121 14:54:53.367708 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 14:54:53 crc kubenswrapper[4902]: I0121 14:54:53.367839 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:54:53 crc kubenswrapper[4902]: I0121 14:54:53.528088 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 14:54:54 crc kubenswrapper[4902]: I0121 14:54:54.920834 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:54:54 crc kubenswrapper[4902]: I0121 14:54:54.921135 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:54:54 crc kubenswrapper[4902]: I0121 14:54:54.975921 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:54 crc kubenswrapper[4902]: I0121 14:54:54.976266 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="ceilometer-central-agent" containerID="cri-o://34b69c1a0b66c8657c3d6821b2c584dbfa27e44857ee98ee3f20ff8b752bed3a" gracePeriod=30 Jan 21 14:54:54 crc kubenswrapper[4902]: I0121 14:54:54.976291 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="proxy-httpd" containerID="cri-o://2747adc30292da8bb9ef5316d34e01fb5b9994182a7d3c00899398caa602de5d" gracePeriod=30 Jan 21 14:54:54 crc kubenswrapper[4902]: I0121 14:54:54.976346 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="sg-core" 
containerID="cri-o://1efb79b1ee63c60deba157f0a412f37783a993f9bcdb9443447b3f3f4120a6da" gracePeriod=30 Jan 21 14:54:54 crc kubenswrapper[4902]: I0121 14:54:54.976332 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="ceilometer-notification-agent" containerID="cri-o://87f4452a0aa396a56b842123857ff8f77a41868b23cb4fe3ecb0ab8734644f5a" gracePeriod=30 Jan 21 14:54:55 crc kubenswrapper[4902]: I0121 14:54:55.168488 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:55 crc kubenswrapper[4902]: I0121 14:54:55.198219 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:55 crc kubenswrapper[4902]: I0121 14:54:55.932424 4902 generic.go:334] "Generic (PLEG): container finished" podID="62f98b44-f071-4c67-a176-1033550150c4" containerID="2747adc30292da8bb9ef5316d34e01fb5b9994182a7d3c00899398caa602de5d" exitCode=0 Jan 21 14:54:55 crc kubenswrapper[4902]: I0121 14:54:55.932654 4902 generic.go:334] "Generic (PLEG): container finished" podID="62f98b44-f071-4c67-a176-1033550150c4" containerID="1efb79b1ee63c60deba157f0a412f37783a993f9bcdb9443447b3f3f4120a6da" exitCode=2 Jan 21 14:54:55 crc kubenswrapper[4902]: I0121 14:54:55.932496 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerDied","Data":"2747adc30292da8bb9ef5316d34e01fb5b9994182a7d3c00899398caa602de5d"} Jan 21 14:54:55 crc kubenswrapper[4902]: I0121 14:54:55.932701 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerDied","Data":"1efb79b1ee63c60deba157f0a412f37783a993f9bcdb9443447b3f3f4120a6da"} Jan 21 14:54:56 crc kubenswrapper[4902]: I0121 14:54:56.956255 4902 generic.go:334] "Generic (PLEG): container finished" podID="62f98b44-f071-4c67-a176-1033550150c4" containerID="87f4452a0aa396a56b842123857ff8f77a41868b23cb4fe3ecb0ab8734644f5a" exitCode=0 Jan 21 14:54:56 crc kubenswrapper[4902]: I0121 14:54:56.956490 4902 generic.go:334] "Generic (PLEG): container finished" podID="62f98b44-f071-4c67-a176-1033550150c4" containerID="34b69c1a0b66c8657c3d6821b2c584dbfa27e44857ee98ee3f20ff8b752bed3a" exitCode=0 Jan 21 14:54:56 crc kubenswrapper[4902]: I0121 14:54:56.956337 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerDied","Data":"87f4452a0aa396a56b842123857ff8f77a41868b23cb4fe3ecb0ab8734644f5a"} Jan 21 14:54:56 crc kubenswrapper[4902]: I0121 14:54:56.956593 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerDied","Data":"34b69c1a0b66c8657c3d6821b2c584dbfa27e44857ee98ee3f20ff8b752bed3a"} Jan 21 14:55:03 crc kubenswrapper[4902]: I0121 14:55:03.985989 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.033295 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerDied","Data":"288e100dc9d7775cb50fbd2d74baf68d2b7ce9f194f99cce744864657ed7517e"} Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.033331 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.033582 4902 scope.go:117] "RemoveContainer" containerID="2747adc30292da8bb9ef5316d34e01fb5b9994182a7d3c00899398caa602de5d" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.059704 4902 scope.go:117] "RemoveContainer" containerID="1efb79b1ee63c60deba157f0a412f37783a993f9bcdb9443447b3f3f4120a6da" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.070505 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvsbg\" (UniqueName: \"kubernetes.io/projected/62f98b44-f071-4c67-a176-1033550150c4-kube-api-access-fvsbg\") pod \"62f98b44-f071-4c67-a176-1033550150c4\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.070572 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-scripts\") pod \"62f98b44-f071-4c67-a176-1033550150c4\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.070622 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-run-httpd\") pod \"62f98b44-f071-4c67-a176-1033550150c4\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.070648 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-combined-ca-bundle\") pod \"62f98b44-f071-4c67-a176-1033550150c4\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.070690 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-config-data\") pod \"62f98b44-f071-4c67-a176-1033550150c4\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.070744 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-sg-core-conf-yaml\") pod \"62f98b44-f071-4c67-a176-1033550150c4\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.070764 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-log-httpd\") pod \"62f98b44-f071-4c67-a176-1033550150c4\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.071481 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-log-httpd" 
(OuterVolumeSpecName: "log-httpd") pod "62f98b44-f071-4c67-a176-1033550150c4" (UID: "62f98b44-f071-4c67-a176-1033550150c4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.071758 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "62f98b44-f071-4c67-a176-1033550150c4" (UID: "62f98b44-f071-4c67-a176-1033550150c4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.089255 4902 scope.go:117] "RemoveContainer" containerID="87f4452a0aa396a56b842123857ff8f77a41868b23cb4fe3ecb0ab8734644f5a" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.089275 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-scripts" (OuterVolumeSpecName: "scripts") pod "62f98b44-f071-4c67-a176-1033550150c4" (UID: "62f98b44-f071-4c67-a176-1033550150c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.089473 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62f98b44-f071-4c67-a176-1033550150c4-kube-api-access-fvsbg" (OuterVolumeSpecName: "kube-api-access-fvsbg") pod "62f98b44-f071-4c67-a176-1033550150c4" (UID: "62f98b44-f071-4c67-a176-1033550150c4"). InnerVolumeSpecName "kube-api-access-fvsbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.097584 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "62f98b44-f071-4c67-a176-1033550150c4" (UID: "62f98b44-f071-4c67-a176-1033550150c4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.118444 4902 scope.go:117] "RemoveContainer" containerID="34b69c1a0b66c8657c3d6821b2c584dbfa27e44857ee98ee3f20ff8b752bed3a" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.146873 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62f98b44-f071-4c67-a176-1033550150c4" (UID: "62f98b44-f071-4c67-a176-1033550150c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.165833 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-config-data" (OuterVolumeSpecName: "config-data") pod "62f98b44-f071-4c67-a176-1033550150c4" (UID: "62f98b44-f071-4c67-a176-1033550150c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.172943 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvsbg\" (UniqueName: \"kubernetes.io/projected/62f98b44-f071-4c67-a176-1033550150c4-kube-api-access-fvsbg\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.172980 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.172990 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.173002 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.173011 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.173019 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.173027 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.372071 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.386465 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.400337 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:04 crc kubenswrapper[4902]: E0121 14:55:04.414721 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="sg-core" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.414765 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="sg-core" Jan 21 14:55:04 crc kubenswrapper[4902]: E0121 14:55:04.414778 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="ceilometer-notification-agent" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.414784 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="ceilometer-notification-agent" Jan 21 14:55:04 crc kubenswrapper[4902]: E0121 14:55:04.414811 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="proxy-httpd" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.414817 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="proxy-httpd" Jan 21 
14:55:04 crc kubenswrapper[4902]: E0121 14:55:04.414826 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="ceilometer-central-agent" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.414832 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="ceilometer-central-agent" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.415082 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="ceilometer-notification-agent" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.415099 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="ceilometer-central-agent" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.415110 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="sg-core" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.415120 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="proxy-httpd" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.416779 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.416869 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.422922 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.423304 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.579969 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.580028 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-log-httpd\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.580078 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-scripts\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.580094 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsdxx\" (UniqueName: \"kubernetes.io/projected/8167d9b9-ec38-488f-90e8-d5e11a6b75be-kube-api-access-wsdxx\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.580125 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.580141 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-run-httpd\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.580166 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-config-data\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.681621 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.681695 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-log-httpd\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.681745 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-scripts\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.681766 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsdxx\" (UniqueName: \"kubernetes.io/projected/8167d9b9-ec38-488f-90e8-d5e11a6b75be-kube-api-access-wsdxx\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.681808 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.681825 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-run-httpd\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.681860 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-config-data\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.682856 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-run-httpd\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.682863 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-log-httpd\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.686914 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-scripts\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.687252 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-config-data\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.688383 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.691150 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.705295 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsdxx\" (UniqueName: \"kubernetes.io/projected/8167d9b9-ec38-488f-90e8-d5e11a6b75be-kube-api-access-wsdxx\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.734286 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:55:05 crc kubenswrapper[4902]: I0121 14:55:05.047192 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" event={"ID":"f6ab900b-a76f-495c-a309-f597e2d835a8","Type":"ContainerStarted","Data":"e91c9182d83789cb593143e414372ebd78fcb513ff497dbf59abde2ed01e0281"} Jan 21 14:55:05 crc kubenswrapper[4902]: I0121 14:55:05.066983 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" podStartSLOduration=2.254203863 podStartE2EDuration="14.066963304s" podCreationTimestamp="2026-01-21 14:54:51 +0000 UTC" firstStartedPulling="2026-01-21 14:54:52.023642728 +0000 UTC m=+1254.100475757" lastFinishedPulling="2026-01-21 14:55:03.836402169 +0000 UTC m=+1265.913235198" observedRunningTime="2026-01-21 14:55:05.061436167 +0000 UTC m=+1267.138269186" watchObservedRunningTime="2026-01-21 14:55:05.066963304 +0000 UTC m=+1267.143796333" Jan 21 14:55:05 crc kubenswrapper[4902]: I0121 14:55:05.197753 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:05 crc kubenswrapper[4902]: W0121 14:55:05.208445 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8167d9b9_ec38_488f_90e8_d5e11a6b75be.slice/crio-46febc8a2b0ea55e7f385549716696e2304cee994189d6c31ce4c2f325ad134b WatchSource:0}: Error finding container 46febc8a2b0ea55e7f385549716696e2304cee994189d6c31ce4c2f325ad134b: Status 404 returned error can't find the container with id 46febc8a2b0ea55e7f385549716696e2304cee994189d6c31ce4c2f325ad134b Jan 21 14:55:06 crc kubenswrapper[4902]: I0121 14:55:06.058644 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerStarted","Data":"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1"} Jan 21 14:55:06 crc kubenswrapper[4902]: I0121 14:55:06.058922 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerStarted","Data":"46febc8a2b0ea55e7f385549716696e2304cee994189d6c31ce4c2f325ad134b"} Jan 21 14:55:06 crc kubenswrapper[4902]: I0121 14:55:06.307726 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62f98b44-f071-4c67-a176-1033550150c4" path="/var/lib/kubelet/pods/62f98b44-f071-4c67-a176-1033550150c4/volumes" Jan 21 14:55:07 crc kubenswrapper[4902]: I0121 14:55:07.076925 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerStarted","Data":"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e"} Jan 21 14:55:09 crc kubenswrapper[4902]: I0121 14:55:09.118666 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerStarted","Data":"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35"} Jan 21 14:55:10 crc kubenswrapper[4902]: I0121 14:55:10.133854 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerStarted","Data":"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3"} Jan 21 14:55:10 crc kubenswrapper[4902]: I0121 14:55:10.134192 4902 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:55:10 crc kubenswrapper[4902]: I0121 14:55:10.162957 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.008105399 podStartE2EDuration="6.162936148s" podCreationTimestamp="2026-01-21 14:55:04 +0000 UTC" firstStartedPulling="2026-01-21 14:55:05.21111288 +0000 UTC m=+1267.287945909" lastFinishedPulling="2026-01-21 14:55:09.365943629 +0000 UTC m=+1271.442776658" observedRunningTime="2026-01-21 14:55:10.159676461 +0000 UTC m=+1272.236509500" watchObservedRunningTime="2026-01-21 14:55:10.162936148 +0000 UTC m=+1272.239769177" Jan 21 14:55:17 crc kubenswrapper[4902]: I0121 14:55:17.210237 4902 generic.go:334] "Generic (PLEG): container finished" podID="f6ab900b-a76f-495c-a309-f597e2d835a8" containerID="e91c9182d83789cb593143e414372ebd78fcb513ff497dbf59abde2ed01e0281" exitCode=0 Jan 21 14:55:17 crc kubenswrapper[4902]: I0121 14:55:17.210397 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" event={"ID":"f6ab900b-a76f-495c-a309-f597e2d835a8","Type":"ContainerDied","Data":"e91c9182d83789cb593143e414372ebd78fcb513ff497dbf59abde2ed01e0281"} Jan 21 14:55:17 crc kubenswrapper[4902]: I0121 14:55:17.770398 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:55:17 crc kubenswrapper[4902]: I0121 14:55:17.770719 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:55:17 crc kubenswrapper[4902]: I0121 14:55:17.770847 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:55:17 crc kubenswrapper[4902]: I0121 14:55:17.771743 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0203ec0a15ee1aa92f4eb3d8e44c0e52d1043afb244cf40caae4761f1f1ee369"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:55:17 crc kubenswrapper[4902]: I0121 14:55:17.771899 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://0203ec0a15ee1aa92f4eb3d8e44c0e52d1043afb244cf40caae4761f1f1ee369" gracePeriod=600 Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.223249 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="0203ec0a15ee1aa92f4eb3d8e44c0e52d1043afb244cf40caae4761f1f1ee369" exitCode=0 Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.223306 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"0203ec0a15ee1aa92f4eb3d8e44c0e52d1043afb244cf40caae4761f1f1ee369"} Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.223611 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"faf0ff0caeac282dde2bef565f9dbd539a4c5633dd4c8ba54b6bd0e6704b0a61"} Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.223676 4902 scope.go:117] "RemoveContainer" containerID="f9ca57ec1458d1c5cf7c9248bedd6ee378b9620abbe566738ff33d6096aeb8f1" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.557946 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.732446 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-config-data\") pod \"f6ab900b-a76f-495c-a309-f597e2d835a8\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.732525 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-combined-ca-bundle\") pod \"f6ab900b-a76f-495c-a309-f597e2d835a8\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.732752 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr5w8\" (UniqueName: \"kubernetes.io/projected/f6ab900b-a76f-495c-a309-f597e2d835a8-kube-api-access-vr5w8\") pod \"f6ab900b-a76f-495c-a309-f597e2d835a8\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.732784 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-scripts\") pod \"f6ab900b-a76f-495c-a309-f597e2d835a8\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.743680 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-scripts" (OuterVolumeSpecName: "scripts") pod "f6ab900b-a76f-495c-a309-f597e2d835a8" (UID: "f6ab900b-a76f-495c-a309-f597e2d835a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.745994 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ab900b-a76f-495c-a309-f597e2d835a8-kube-api-access-vr5w8" (OuterVolumeSpecName: "kube-api-access-vr5w8") pod "f6ab900b-a76f-495c-a309-f597e2d835a8" (UID: "f6ab900b-a76f-495c-a309-f597e2d835a8"). InnerVolumeSpecName "kube-api-access-vr5w8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.761781 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-config-data" (OuterVolumeSpecName: "config-data") pod "f6ab900b-a76f-495c-a309-f597e2d835a8" (UID: "f6ab900b-a76f-495c-a309-f597e2d835a8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.762355 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6ab900b-a76f-495c-a309-f597e2d835a8" (UID: "f6ab900b-a76f-495c-a309-f597e2d835a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.834455 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr5w8\" (UniqueName: \"kubernetes.io/projected/f6ab900b-a76f-495c-a309-f597e2d835a8-kube-api-access-vr5w8\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.834652 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.834732 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.834787 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.237232 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.237220 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" event={"ID":"f6ab900b-a76f-495c-a309-f597e2d835a8","Type":"ContainerDied","Data":"7a95c2bf8aaa1b14521dd5e9e1895d33696aae4fd5473b52aeb0bdb216066121"} Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.237428 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a95c2bf8aaa1b14521dd5e9e1895d33696aae4fd5473b52aeb0bdb216066121" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.368448 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:55:19 crc kubenswrapper[4902]: E0121 14:55:19.369076 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ab900b-a76f-495c-a309-f597e2d835a8" containerName="nova-cell0-conductor-db-sync" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.369172 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ab900b-a76f-495c-a309-f597e2d835a8" containerName="nova-cell0-conductor-db-sync" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.369401 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ab900b-a76f-495c-a309-f597e2d835a8" containerName="nova-cell0-conductor-db-sync" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.370018 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.372960 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4sv6b" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.372968 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.422398 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.446720 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.446990 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.447125 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww8pt\" (UniqueName: \"kubernetes.io/projected/359a818e-1c34-4dfd-bb59-0e72280a85a0-kube-api-access-ww8pt\") pod \"nova-cell0-conductor-0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.548984 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww8pt\" (UniqueName: \"kubernetes.io/projected/359a818e-1c34-4dfd-bb59-0e72280a85a0-kube-api-access-ww8pt\") pod \"nova-cell0-conductor-0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.549090 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.549210 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.555299 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.555311 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.575699 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww8pt\" (UniqueName: \"kubernetes.io/projected/359a818e-1c34-4dfd-bb59-0e72280a85a0-kube-api-access-ww8pt\") pod \"nova-cell0-conductor-0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.701389 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:20 crc kubenswrapper[4902]: I0121 14:55:20.144350 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:55:20 crc kubenswrapper[4902]: I0121 14:55:20.254777 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"359a818e-1c34-4dfd-bb59-0e72280a85a0","Type":"ContainerStarted","Data":"4ffea13c5b1ca8a19fa0ab7ab117654ce080a9b7f7c854db7559f017b9ca3c40"} Jan 21 14:55:21 crc kubenswrapper[4902]: I0121 14:55:21.265272 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"359a818e-1c34-4dfd-bb59-0e72280a85a0","Type":"ContainerStarted","Data":"a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe"} Jan 21 14:55:21 crc kubenswrapper[4902]: I0121 14:55:21.265625 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:21 crc kubenswrapper[4902]: I0121 14:55:21.293924 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.293904114 podStartE2EDuration="2.293904114s" podCreationTimestamp="2026-01-21 14:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:21.282772087 +0000 UTC m=+1283.359605116" watchObservedRunningTime="2026-01-21 14:55:21.293904114 +0000 UTC m=+1283.370737163" Jan 21 14:55:29 crc kubenswrapper[4902]: I0121 14:55:29.748436 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.260825 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-hlnnm"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.262240 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.265080 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.266291 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.276465 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hlnnm"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.336487 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-config-data\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.336534 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xdvq\" (UniqueName: \"kubernetes.io/projected/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-kube-api-access-2xdvq\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.336573 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-scripts\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.336679 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.438657 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-config-data\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.438979 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xdvq\" (UniqueName: \"kubernetes.io/projected/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-kube-api-access-2xdvq\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.439021 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-scripts\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.439098 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.445718 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-scripts\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.449602 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-config-data\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.453745 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.483314 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.485076 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.487370 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.488846 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.492632 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.495019 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.498855 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xdvq\" (UniqueName: \"kubernetes.io/projected/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-kube-api-access-2xdvq\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.499254 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.626611 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.633763 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.633835 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xcdf\" (UniqueName: \"kubernetes.io/projected/09716208-ecef-418b-b04b-fcfad53e017d-kube-api-access-9xcdf\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.633890 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-config-data\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.633937 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.633978 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-config-data\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.634060 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6msrn\" (UniqueName: \"kubernetes.io/projected/3d8ede08-fd8e-4922-901c-9767821d918d-kube-api-access-6msrn\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.634137 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d8ede08-fd8e-4922-901c-9767821d918d-logs\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.634175 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09716208-ecef-418b-b04b-fcfad53e017d-logs\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.665264 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.743498 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " 
pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.743785 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xcdf\" (UniqueName: \"kubernetes.io/projected/09716208-ecef-418b-b04b-fcfad53e017d-kube-api-access-9xcdf\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.743857 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-config-data\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.743920 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.743969 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-config-data\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.744067 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6msrn\" (UniqueName: \"kubernetes.io/projected/3d8ede08-fd8e-4922-901c-9767821d918d-kube-api-access-6msrn\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.744155 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d8ede08-fd8e-4922-901c-9767821d918d-logs\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.744198 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09716208-ecef-418b-b04b-fcfad53e017d-logs\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.746159 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09716208-ecef-418b-b04b-fcfad53e017d-logs\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.749683 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d8ede08-fd8e-4922-901c-9767821d918d-logs\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.760368 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc 
kubenswrapper[4902]: I0121 14:55:30.761093 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-config-data\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.762834 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-config-data\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.782852 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.783897 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6msrn\" (UniqueName: \"kubernetes.io/projected/3d8ede08-fd8e-4922-901c-9767821d918d-kube-api-access-6msrn\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.805775 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xcdf\" (UniqueName: \"kubernetes.io/projected/09716208-ecef-418b-b04b-fcfad53e017d-kube-api-access-9xcdf\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.849852 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.871762 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.879091 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.880267 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.918753 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.921757 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.935501 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.936680 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.940866 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.947565 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rzjx\" (UniqueName: \"kubernetes.io/projected/cabfbeed-c979-4978-bdeb-68ac2c9023a1-kube-api-access-7rzjx\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.947610 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.947631 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.947662 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnv6n\" (UniqueName: \"kubernetes.io/projected/c7593ca7-9aeb-4763-8bc3-964147d459ce-kube-api-access-fnv6n\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.947687 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.947775 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-config-data\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.948906 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.968370 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-2m4b6"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.970756 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.988275 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-2m4b6"] Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.049882 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.049938 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rzjx\" (UniqueName: \"kubernetes.io/projected/cabfbeed-c979-4978-bdeb-68ac2c9023a1-kube-api-access-7rzjx\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.049976 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.049992 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.050010 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqp69\" (UniqueName: \"kubernetes.io/projected/38f0c216-daa1-42c6-9105-11ad7d5fc686-kube-api-access-zqp69\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.050029 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnv6n\" (UniqueName: \"kubernetes.io/projected/c7593ca7-9aeb-4763-8bc3-964147d459ce-kube-api-access-fnv6n\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.050068 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.050095 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.050154 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.050170 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-config\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.050190 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-config-data\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.050204 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.060727 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.061076 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.062571 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-config-data\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.062641 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.069757 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rzjx\" (UniqueName: \"kubernetes.io/projected/cabfbeed-c979-4978-bdeb-68ac2c9023a1-kube-api-access-7rzjx\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.078906 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnv6n\" (UniqueName: \"kubernetes.io/projected/c7593ca7-9aeb-4763-8bc3-964147d459ce-kube-api-access-fnv6n\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.153006 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.153100 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.153124 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-config\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.153147 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.153203 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.153269 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqp69\" (UniqueName: \"kubernetes.io/projected/38f0c216-daa1-42c6-9105-11ad7d5fc686-kube-api-access-zqp69\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.155162 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.155321 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-config\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.155778 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 
14:55:31.156092 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.156313 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.186482 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqp69\" (UniqueName: \"kubernetes.io/projected/38f0c216-daa1-42c6-9105-11ad7d5fc686-kube-api-access-zqp69\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.254945 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.278105 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.299544 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.326324 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hlnnm"] Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.391892 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hlnnm" event={"ID":"86cb92f1-5dde-4389-a5c8-1c0f76b1478d","Type":"ContainerStarted","Data":"7fb36f9b5a0756160aa3e55d39cd3770f84453c05c73ff6f985ac88cc53732b4"} Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.488817 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.566144 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.636742 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.857348 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lrj4d"] Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.858554 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.863129 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.863185 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.866000 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lrj4d"] Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.934001 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:31 crc kubenswrapper[4902]: W0121 14:55:31.937656 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcabfbeed_c979_4978_bdeb_68ac2c9023a1.slice/crio-d489edc5237f4ac81854560a301bc41de7cb9dc499684a0dffb569a687d5db5d WatchSource:0}: Error finding container d489edc5237f4ac81854560a301bc41de7cb9dc499684a0dffb569a687d5db5d: Status 404 returned error can't find the container with id d489edc5237f4ac81854560a301bc41de7cb9dc499684a0dffb569a687d5db5d Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.998800 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hbvf\" (UniqueName: \"kubernetes.io/projected/169597ed-1e1f-490a-8d17-0d6520ae39d1-kube-api-access-7hbvf\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.998940 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.998971 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-scripts\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.999000 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-config-data\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: W0121 14:55:32.057423 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38f0c216_daa1_42c6_9105_11ad7d5fc686.slice/crio-39dee8363ceb2e4e3e0e527468730e2f88866ca83679486319418b5875a94a82 WatchSource:0}: Error finding container 39dee8363ceb2e4e3e0e527468730e2f88866ca83679486319418b5875a94a82: Status 404 returned error can't find the container with id 39dee8363ceb2e4e3e0e527468730e2f88866ca83679486319418b5875a94a82 Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 
14:55:32.065931 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-2m4b6"] Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.074837 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.100630 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hbvf\" (UniqueName: \"kubernetes.io/projected/169597ed-1e1f-490a-8d17-0d6520ae39d1-kube-api-access-7hbvf\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.100869 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.100954 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-scripts\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.102521 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-config-data\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.104391 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-scripts\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.107535 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.108679 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-config-data\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.116875 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hbvf\" (UniqueName: \"kubernetes.io/projected/169597ed-1e1f-490a-8d17-0d6520ae39d1-kube-api-access-7hbvf\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.181819 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.409393 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d8ede08-fd8e-4922-901c-9767821d918d","Type":"ContainerStarted","Data":"023c89d0fbc7b0fc5efc71aecbfa4d8d80f6d918ae1bd1523efe2599e8dc31eb"} Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.411529 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hlnnm" event={"ID":"86cb92f1-5dde-4389-a5c8-1c0f76b1478d","Type":"ContainerStarted","Data":"80f1113ebae178430104e31cb438bfd4b8237fd75e17bfe92c4d153d21a7d7b4"} Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.413893 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09716208-ecef-418b-b04b-fcfad53e017d","Type":"ContainerStarted","Data":"34f32efff8c1f6aabcc0c5371906f0935d5fd2d86c65b2814e1ce5ed501c9460"} Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.415768 4902 generic.go:334] "Generic (PLEG): container finished" podID="38f0c216-daa1-42c6-9105-11ad7d5fc686" containerID="e33abee5a4d9568ebeebc43b93ca969e5d4b5cadc5a0cff7461433d918dfb71d" exitCode=0 Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.415815 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" event={"ID":"38f0c216-daa1-42c6-9105-11ad7d5fc686","Type":"ContainerDied","Data":"e33abee5a4d9568ebeebc43b93ca969e5d4b5cadc5a0cff7461433d918dfb71d"} Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.415832 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" event={"ID":"38f0c216-daa1-42c6-9105-11ad7d5fc686","Type":"ContainerStarted","Data":"39dee8363ceb2e4e3e0e527468730e2f88866ca83679486319418b5875a94a82"} Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.419960 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c7593ca7-9aeb-4763-8bc3-964147d459ce","Type":"ContainerStarted","Data":"20c0da8ce9148a9ce1d2bbb934c0cba1985f7cac4f00b74da4ba453452a4725d"} Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.433181 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cabfbeed-c979-4978-bdeb-68ac2c9023a1","Type":"ContainerStarted","Data":"d489edc5237f4ac81854560a301bc41de7cb9dc499684a0dffb569a687d5db5d"} Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.439486 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-hlnnm" podStartSLOduration=2.439464293 podStartE2EDuration="2.439464293s" podCreationTimestamp="2026-01-21 14:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:32.426191099 +0000 UTC m=+1294.503024128" watchObservedRunningTime="2026-01-21 14:55:32.439464293 +0000 UTC m=+1294.516297332" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.663346 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lrj4d"] Jan 21 14:55:32 crc kubenswrapper[4902]: W0121 14:55:32.684640 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169597ed_1e1f_490a_8d17_0d6520ae39d1.slice/crio-0835df5efec2028b909596249b9d9f9a73e0f10cf3316792b66bed135b1a92af WatchSource:0}: 
Error finding container 0835df5efec2028b909596249b9d9f9a73e0f10cf3316792b66bed135b1a92af: Status 404 returned error can't find the container with id 0835df5efec2028b909596249b9d9f9a73e0f10cf3316792b66bed135b1a92af Jan 21 14:55:33 crc kubenswrapper[4902]: I0121 14:55:33.451743 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" event={"ID":"38f0c216-daa1-42c6-9105-11ad7d5fc686","Type":"ContainerStarted","Data":"33ac3053e080a371fb9c1294b84f90b6187ab8ed37ebcb04994475127b9d12dc"} Jan 21 14:55:33 crc kubenswrapper[4902]: I0121 14:55:33.452195 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:33 crc kubenswrapper[4902]: I0121 14:55:33.457096 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" event={"ID":"169597ed-1e1f-490a-8d17-0d6520ae39d1","Type":"ContainerStarted","Data":"f49c0a85c7357d87bfb57238893040a47cac5bc0bd2e46a347d2884a529aa300"} Jan 21 14:55:33 crc kubenswrapper[4902]: I0121 14:55:33.457178 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" event={"ID":"169597ed-1e1f-490a-8d17-0d6520ae39d1","Type":"ContainerStarted","Data":"0835df5efec2028b909596249b9d9f9a73e0f10cf3316792b66bed135b1a92af"} Jan 21 14:55:33 crc kubenswrapper[4902]: I0121 14:55:33.478074 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" podStartSLOduration=3.478036107 podStartE2EDuration="3.478036107s" podCreationTimestamp="2026-01-21 14:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:33.471886193 +0000 UTC m=+1295.548719212" watchObservedRunningTime="2026-01-21 14:55:33.478036107 +0000 UTC m=+1295.554869136" Jan 21 14:55:33 crc kubenswrapper[4902]: I0121 14:55:33.501412 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" podStartSLOduration=2.5013918200000003 podStartE2EDuration="2.50139182s" podCreationTimestamp="2026-01-21 14:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:33.494366242 +0000 UTC m=+1295.571199271" watchObservedRunningTime="2026-01-21 14:55:33.50139182 +0000 UTC m=+1295.578224849" Jan 21 14:55:34 crc kubenswrapper[4902]: I0121 14:55:34.437466 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:55:34 crc kubenswrapper[4902]: I0121 14:55:34.449539 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:34 crc kubenswrapper[4902]: I0121 14:55:34.746719 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.485930 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d8ede08-fd8e-4922-901c-9767821d918d","Type":"ContainerStarted","Data":"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f"} Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.486704 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"3d8ede08-fd8e-4922-901c-9767821d918d","Type":"ContainerStarted","Data":"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991"} Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.486183 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" containerName="nova-metadata-metadata" containerID="cri-o://d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f" gracePeriod=30 Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.486105 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" containerName="nova-metadata-log" containerID="cri-o://1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991" gracePeriod=30 Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.489684 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09716208-ecef-418b-b04b-fcfad53e017d","Type":"ContainerStarted","Data":"adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a"} Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.489967 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09716208-ecef-418b-b04b-fcfad53e017d","Type":"ContainerStarted","Data":"802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d"} Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.497537 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c7593ca7-9aeb-4763-8bc3-964147d459ce","Type":"ContainerStarted","Data":"84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b"} Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.497670 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="c7593ca7-9aeb-4763-8bc3-964147d459ce" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b" gracePeriod=30 Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.512350 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cabfbeed-c979-4978-bdeb-68ac2c9023a1","Type":"ContainerStarted","Data":"0a33fc5008db4187bd7ddd59bc8804f14a26d7077e5c5b41e22b0a33b1e2dff7"} Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.522657 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.904092017 podStartE2EDuration="6.52263796s" podCreationTimestamp="2026-01-21 14:55:30 +0000 UTC" firstStartedPulling="2026-01-21 14:55:31.56583071 +0000 UTC m=+1293.642663739" lastFinishedPulling="2026-01-21 14:55:35.184376653 +0000 UTC m=+1297.261209682" observedRunningTime="2026-01-21 14:55:36.51024468 +0000 UTC m=+1298.587077709" watchObservedRunningTime="2026-01-21 14:55:36.52263796 +0000 UTC m=+1298.599470989" Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.533951 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.052426564 podStartE2EDuration="6.533935622s" podCreationTimestamp="2026-01-21 14:55:30 +0000 UTC" firstStartedPulling="2026-01-21 14:55:31.695337554 +0000 UTC m=+1293.772170583" lastFinishedPulling="2026-01-21 14:55:35.176846612 +0000 UTC m=+1297.253679641" observedRunningTime="2026-01-21 14:55:36.533103289 +0000 
UTC m=+1298.609936318" watchObservedRunningTime="2026-01-21 14:55:36.533935622 +0000 UTC m=+1298.610768651" Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.553499 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.440434914 podStartE2EDuration="6.553483503s" podCreationTimestamp="2026-01-21 14:55:30 +0000 UTC" firstStartedPulling="2026-01-21 14:55:32.065150289 +0000 UTC m=+1294.141983318" lastFinishedPulling="2026-01-21 14:55:35.178198878 +0000 UTC m=+1297.255031907" observedRunningTime="2026-01-21 14:55:36.547924915 +0000 UTC m=+1298.624757944" watchObservedRunningTime="2026-01-21 14:55:36.553483503 +0000 UTC m=+1298.630316532" Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.576584 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.339683406 podStartE2EDuration="6.576567949s" podCreationTimestamp="2026-01-21 14:55:30 +0000 UTC" firstStartedPulling="2026-01-21 14:55:31.939479747 +0000 UTC m=+1294.016312776" lastFinishedPulling="2026-01-21 14:55:35.17636429 +0000 UTC m=+1297.253197319" observedRunningTime="2026-01-21 14:55:36.565067322 +0000 UTC m=+1298.641900351" watchObservedRunningTime="2026-01-21 14:55:36.576567949 +0000 UTC m=+1298.653400978" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.077057 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.227884 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-config-data\") pod \"3d8ede08-fd8e-4922-901c-9767821d918d\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.227948 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-combined-ca-bundle\") pod \"3d8ede08-fd8e-4922-901c-9767821d918d\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.228026 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d8ede08-fd8e-4922-901c-9767821d918d-logs\") pod \"3d8ede08-fd8e-4922-901c-9767821d918d\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.228156 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6msrn\" (UniqueName: \"kubernetes.io/projected/3d8ede08-fd8e-4922-901c-9767821d918d-kube-api-access-6msrn\") pod \"3d8ede08-fd8e-4922-901c-9767821d918d\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.228434 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d8ede08-fd8e-4922-901c-9767821d918d-logs" (OuterVolumeSpecName: "logs") pod "3d8ede08-fd8e-4922-901c-9767821d918d" (UID: "3d8ede08-fd8e-4922-901c-9767821d918d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.228761 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d8ede08-fd8e-4922-901c-9767821d918d-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.246192 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d8ede08-fd8e-4922-901c-9767821d918d-kube-api-access-6msrn" (OuterVolumeSpecName: "kube-api-access-6msrn") pod "3d8ede08-fd8e-4922-901c-9767821d918d" (UID: "3d8ede08-fd8e-4922-901c-9767821d918d"). InnerVolumeSpecName "kube-api-access-6msrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.275170 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-config-data" (OuterVolumeSpecName: "config-data") pod "3d8ede08-fd8e-4922-901c-9767821d918d" (UID: "3d8ede08-fd8e-4922-901c-9767821d918d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.284364 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d8ede08-fd8e-4922-901c-9767821d918d" (UID: "3d8ede08-fd8e-4922-901c-9767821d918d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.330213 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6msrn\" (UniqueName: \"kubernetes.io/projected/3d8ede08-fd8e-4922-901c-9767821d918d-kube-api-access-6msrn\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.330245 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.330255 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.521980 4902 generic.go:334] "Generic (PLEG): container finished" podID="3d8ede08-fd8e-4922-901c-9767821d918d" containerID="d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f" exitCode=0 Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.522017 4902 generic.go:334] "Generic (PLEG): container finished" podID="3d8ede08-fd8e-4922-901c-9767821d918d" containerID="1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991" exitCode=143 Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.522034 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.522112 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d8ede08-fd8e-4922-901c-9767821d918d","Type":"ContainerDied","Data":"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f"} Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.522169 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d8ede08-fd8e-4922-901c-9767821d918d","Type":"ContainerDied","Data":"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991"} Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.522180 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d8ede08-fd8e-4922-901c-9767821d918d","Type":"ContainerDied","Data":"023c89d0fbc7b0fc5efc71aecbfa4d8d80f6d918ae1bd1523efe2599e8dc31eb"} Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.522212 4902 scope.go:117] "RemoveContainer" containerID="d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.546473 4902 scope.go:117] "RemoveContainer" containerID="1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.559823 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.582882 4902 scope.go:117] "RemoveContainer" containerID="d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f" Jan 21 14:55:37 crc kubenswrapper[4902]: E0121 14:55:37.583308 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f\": container with ID starting with d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f not found: ID does not exist" containerID="d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.583335 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f"} err="failed to get container status \"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f\": rpc error: code = NotFound desc = could not find container \"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f\": container with ID starting with d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f not found: ID does not exist" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.583353 4902 scope.go:117] "RemoveContainer" containerID="1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991" Jan 21 14:55:37 crc kubenswrapper[4902]: E0121 14:55:37.583759 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991\": container with ID starting with 1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991 not found: ID does not exist" containerID="1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.583818 4902 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991"} err="failed to get container status \"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991\": rpc error: code = NotFound desc = could not find container \"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991\": container with ID starting with 1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991 not found: ID does not exist" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.583832 4902 scope.go:117] "RemoveContainer" containerID="d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.584003 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f"} err="failed to get container status \"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f\": rpc error: code = NotFound desc = could not find container \"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f\": container with ID starting with d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f not found: ID does not exist" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.584051 4902 scope.go:117] "RemoveContainer" containerID="1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.584196 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991"} err="failed to get container status \"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991\": rpc error: code = NotFound desc = could not find container \"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991\": container with ID starting with 1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991 not found: ID does not exist" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.586275 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.624782 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:37 crc kubenswrapper[4902]: E0121 14:55:37.625339 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" containerName="nova-metadata-metadata" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.625363 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" containerName="nova-metadata-metadata" Jan 21 14:55:37 crc kubenswrapper[4902]: E0121 14:55:37.625408 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" containerName="nova-metadata-log" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.625418 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" containerName="nova-metadata-log" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.625646 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" containerName="nova-metadata-log" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.625668 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" containerName="nova-metadata-metadata" Jan 21 14:55:37 crc 
kubenswrapper[4902]: I0121 14:55:37.626849 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.629631 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.629817 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.637443 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.740326 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkz6r\" (UniqueName: \"kubernetes.io/projected/89ac4ce1-6229-4354-a3a7-13251f691937-kube-api-access-gkz6r\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.740666 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.740863 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ac4ce1-6229-4354-a3a7-13251f691937-logs\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.740924 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-config-data\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.741240 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.842684 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ac4ce1-6229-4354-a3a7-13251f691937-logs\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.842731 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-config-data\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.842812 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.842853 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkz6r\" (UniqueName: \"kubernetes.io/projected/89ac4ce1-6229-4354-a3a7-13251f691937-kube-api-access-gkz6r\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.842871 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.844025 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ac4ce1-6229-4354-a3a7-13251f691937-logs\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.847854 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-config-data\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.850660 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.850715 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.860433 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkz6r\" (UniqueName: \"kubernetes.io/projected/89ac4ce1-6229-4354-a3a7-13251f691937-kube-api-access-gkz6r\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.950839 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:38 crc kubenswrapper[4902]: I0121 14:55:38.363296 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" path="/var/lib/kubelet/pods/3d8ede08-fd8e-4922-901c-9767821d918d/volumes" Jan 21 14:55:38 crc kubenswrapper[4902]: W0121 14:55:38.560116 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89ac4ce1_6229_4354_a3a7_13251f691937.slice/crio-128e5578ec9eebb1bc576fc0112c902af373a1f591fbf27e6e74b44b8af2d2e0 WatchSource:0}: Error finding container 128e5578ec9eebb1bc576fc0112c902af373a1f591fbf27e6e74b44b8af2d2e0: Status 404 returned error can't find the container with id 128e5578ec9eebb1bc576fc0112c902af373a1f591fbf27e6e74b44b8af2d2e0 Jan 21 14:55:38 crc kubenswrapper[4902]: I0121 14:55:38.560125 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:39 crc kubenswrapper[4902]: I0121 14:55:39.552642 4902 generic.go:334] "Generic (PLEG): container finished" podID="86cb92f1-5dde-4389-a5c8-1c0f76b1478d" containerID="80f1113ebae178430104e31cb438bfd4b8237fd75e17bfe92c4d153d21a7d7b4" exitCode=0 Jan 21 14:55:39 crc kubenswrapper[4902]: I0121 14:55:39.552734 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hlnnm" event={"ID":"86cb92f1-5dde-4389-a5c8-1c0f76b1478d","Type":"ContainerDied","Data":"80f1113ebae178430104e31cb438bfd4b8237fd75e17bfe92c4d153d21a7d7b4"} Jan 21 14:55:39 crc kubenswrapper[4902]: I0121 14:55:39.555887 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89ac4ce1-6229-4354-a3a7-13251f691937","Type":"ContainerStarted","Data":"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83"} Jan 21 14:55:39 crc kubenswrapper[4902]: I0121 14:55:39.555937 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89ac4ce1-6229-4354-a3a7-13251f691937","Type":"ContainerStarted","Data":"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3"} Jan 21 14:55:39 crc kubenswrapper[4902]: I0121 14:55:39.555946 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89ac4ce1-6229-4354-a3a7-13251f691937","Type":"ContainerStarted","Data":"128e5578ec9eebb1bc576fc0112c902af373a1f591fbf27e6e74b44b8af2d2e0"} Jan 21 14:55:39 crc kubenswrapper[4902]: I0121 14:55:39.595982 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:55:39 crc kubenswrapper[4902]: I0121 14:55:39.596188 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="14fb6fe4-7f85-4d0a-b6f6-a86c152cb113" containerName="kube-state-metrics" containerID="cri-o://eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba" gracePeriod=30 Jan 21 14:55:39 crc kubenswrapper[4902]: I0121 14:55:39.598898 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.5988764680000003 podStartE2EDuration="2.598876468s" podCreationTimestamp="2026-01-21 14:55:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:39.598361984 +0000 UTC m=+1301.675195013" watchObservedRunningTime="2026-01-21 14:55:39.598876468 +0000 UTC m=+1301.675709497" 
Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.103668 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.300204 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhfv5\" (UniqueName: \"kubernetes.io/projected/14fb6fe4-7f85-4d0a-b6f6-a86c152cb113-kube-api-access-jhfv5\") pod \"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113\" (UID: \"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113\") " Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.306033 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fb6fe4-7f85-4d0a-b6f6-a86c152cb113-kube-api-access-jhfv5" (OuterVolumeSpecName: "kube-api-access-jhfv5") pod "14fb6fe4-7f85-4d0a-b6f6-a86c152cb113" (UID: "14fb6fe4-7f85-4d0a-b6f6-a86c152cb113"). InnerVolumeSpecName "kube-api-access-jhfv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.401927 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhfv5\" (UniqueName: \"kubernetes.io/projected/14fb6fe4-7f85-4d0a-b6f6-a86c152cb113-kube-api-access-jhfv5\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.568107 4902 generic.go:334] "Generic (PLEG): container finished" podID="14fb6fe4-7f85-4d0a-b6f6-a86c152cb113" containerID="eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba" exitCode=2 Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.568184 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113","Type":"ContainerDied","Data":"eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba"} Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.568231 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113","Type":"ContainerDied","Data":"83455c4bf3aeb7b7c76443c4b9198dde4cf810334ccfb634a4b5c17df6d13e97"} Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.568224 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.568253 4902 scope.go:117] "RemoveContainer" containerID="eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.598424 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.615919 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.633403 4902 scope.go:117] "RemoveContainer" containerID="eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba" Jan 21 14:55:40 crc kubenswrapper[4902]: E0121 14:55:40.635736 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba\": container with ID starting with eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba not found: ID does not exist" containerID="eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.636336 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba"} err="failed to get container status \"eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba\": rpc error: code = NotFound desc = could not find container \"eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba\": container with ID starting with eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba not found: ID does not exist" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.656113 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:55:40 crc kubenswrapper[4902]: E0121 14:55:40.656661 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fb6fe4-7f85-4d0a-b6f6-a86c152cb113" containerName="kube-state-metrics" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.656680 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fb6fe4-7f85-4d0a-b6f6-a86c152cb113" containerName="kube-state-metrics" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.656971 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="14fb6fe4-7f85-4d0a-b6f6-a86c152cb113" containerName="kube-state-metrics" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.657793 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.662775 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.663181 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.666977 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.817037 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.829249 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4sgn\" (UniqueName: \"kubernetes.io/projected/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-api-access-t4sgn\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.829393 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.829642 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.860572 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.860613 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.932661 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.932759 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.932799 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4sgn\" (UniqueName: 
\"kubernetes.io/projected/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-api-access-t4sgn\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.932853 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.952546 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.957503 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.009705 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.022125 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4sgn\" (UniqueName: \"kubernetes.io/projected/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-api-access-t4sgn\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.135777 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.237836 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-combined-ca-bundle\") pod \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.237955 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-scripts\") pod \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.238198 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xdvq\" (UniqueName: \"kubernetes.io/projected/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-kube-api-access-2xdvq\") pod \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.238246 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-config-data\") pod \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.241891 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-kube-api-access-2xdvq" (OuterVolumeSpecName: "kube-api-access-2xdvq") pod "86cb92f1-5dde-4389-a5c8-1c0f76b1478d" (UID: "86cb92f1-5dde-4389-a5c8-1c0f76b1478d"). InnerVolumeSpecName "kube-api-access-2xdvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.243905 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-scripts" (OuterVolumeSpecName: "scripts") pod "86cb92f1-5dde-4389-a5c8-1c0f76b1478d" (UID: "86cb92f1-5dde-4389-a5c8-1c0f76b1478d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.257157 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.257656 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.271177 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86cb92f1-5dde-4389-a5c8-1c0f76b1478d" (UID: "86cb92f1-5dde-4389-a5c8-1c0f76b1478d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.279588 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.290511 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-config-data" (OuterVolumeSpecName: "config-data") pod "86cb92f1-5dde-4389-a5c8-1c0f76b1478d" (UID: "86cb92f1-5dde-4389-a5c8-1c0f76b1478d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.292149 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.293551 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.302163 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.340714 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.340993 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xdvq\" (UniqueName: \"kubernetes.io/projected/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-kube-api-access-2xdvq\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.341007 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.341028 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.411970 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-94qng"] Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.412394 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" podUID="9c09608f-53ce-4d79-85d0-75bf0e552380" containerName="dnsmasq-dns" containerID="cri-o://58e43d8b58cb6c7891b30fdbcfbbaedb613b0110edddfc00c5eeec2b0d50db94" gracePeriod=10 Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.629833 4902 generic.go:334] "Generic (PLEG): container finished" podID="169597ed-1e1f-490a-8d17-0d6520ae39d1" containerID="f49c0a85c7357d87bfb57238893040a47cac5bc0bd2e46a347d2884a529aa300" exitCode=0 Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.629946 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" event={"ID":"169597ed-1e1f-490a-8d17-0d6520ae39d1","Type":"ContainerDied","Data":"f49c0a85c7357d87bfb57238893040a47cac5bc0bd2e46a347d2884a529aa300"} Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.638160 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hlnnm" 
event={"ID":"86cb92f1-5dde-4389-a5c8-1c0f76b1478d","Type":"ContainerDied","Data":"7fb36f9b5a0756160aa3e55d39cd3770f84453c05c73ff6f985ac88cc53732b4"} Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.638209 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fb36f9b5a0756160aa3e55d39cd3770f84453c05c73ff6f985ac88cc53732b4" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.638286 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.643011 4902 generic.go:334] "Generic (PLEG): container finished" podID="9c09608f-53ce-4d79-85d0-75bf0e552380" containerID="58e43d8b58cb6c7891b30fdbcfbbaedb613b0110edddfc00c5eeec2b0d50db94" exitCode=0 Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.643200 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" event={"ID":"9c09608f-53ce-4d79-85d0-75bf0e552380","Type":"ContainerDied","Data":"58e43d8b58cb6c7891b30fdbcfbbaedb613b0110edddfc00c5eeec2b0d50db94"} Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.685165 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.783654 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.783920 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-log" containerID="cri-o://802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d" gracePeriod=30 Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.784497 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-api" containerID="cri-o://adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a" gracePeriod=30 Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.790828 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.791014 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" containerName="nova-metadata-log" containerID="cri-o://46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3" gracePeriod=30 Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.791140 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" containerName="nova-metadata-metadata" containerID="cri-o://5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83" gracePeriod=30 Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.832436 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.183:8774/\": EOF" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.832436 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-log" probeResult="failure" output="Get 
\"http://10.217.0.183:8774/\": EOF" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.916614 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.334720 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14fb6fe4-7f85-4d0a-b6f6-a86c152cb113" path="/var/lib/kubelet/pods/14fb6fe4-7f85-4d0a-b6f6-a86c152cb113/volumes" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.335585 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.441971 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.442260 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="ceilometer-central-agent" containerID="cri-o://ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1" gracePeriod=30 Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.442652 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="proxy-httpd" containerID="cri-o://409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3" gracePeriod=30 Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.442693 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="sg-core" containerID="cri-o://8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35" gracePeriod=30 Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.442728 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="ceilometer-notification-agent" containerID="cri-o://f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e" gracePeriod=30 Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.579213 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.634152 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.697189 4902 generic.go:334] "Generic (PLEG): container finished" podID="09716208-ecef-418b-b04b-fcfad53e017d" containerID="802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d" exitCode=143 Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.697295 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09716208-ecef-418b-b04b-fcfad53e017d","Type":"ContainerDied","Data":"802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d"} Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.706934 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" event={"ID":"9c09608f-53ce-4d79-85d0-75bf0e552380","Type":"ContainerDied","Data":"d5afd22000b4d3f3c6a7c4d47e16d67d68cf4b35e698216b1393d6178399d3b9"} Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.706987 4902 scope.go:117] "RemoveContainer" containerID="58e43d8b58cb6c7891b30fdbcfbbaedb613b0110edddfc00c5eeec2b0d50db94" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.707150 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.711506 4902 generic.go:334] "Generic (PLEG): container finished" podID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerID="8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35" exitCode=2 Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.711550 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerDied","Data":"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35"} Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.713008 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b52494a8-ff56-449e-a274-b37eb4bad43d","Type":"ContainerStarted","Data":"b11f8ee0923ff98e0291569b03ef8eeccd15dca9bc3a6e79246d5a184580c3ae"} Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.714310 4902 generic.go:334] "Generic (PLEG): container finished" podID="89ac4ce1-6229-4354-a3a7-13251f691937" containerID="5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83" exitCode=0 Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.714330 4902 generic.go:334] "Generic (PLEG): container finished" podID="89ac4ce1-6229-4354-a3a7-13251f691937" containerID="46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3" exitCode=143 Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.714465 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.714836 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89ac4ce1-6229-4354-a3a7-13251f691937","Type":"ContainerDied","Data":"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83"} Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.714852 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89ac4ce1-6229-4354-a3a7-13251f691937","Type":"ContainerDied","Data":"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3"} Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.714863 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89ac4ce1-6229-4354-a3a7-13251f691937","Type":"ContainerDied","Data":"128e5578ec9eebb1bc576fc0112c902af373a1f591fbf27e6e74b44b8af2d2e0"} Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.734752 4902 scope.go:117] "RemoveContainer" containerID="054019a0d14354ed0c0e875d417095f6b26794e582d8869760a6468e64837519" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.775549 4902 scope.go:117] "RemoveContainer" containerID="5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786307 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-nb\") pod \"9c09608f-53ce-4d79-85d0-75bf0e552380\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786433 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-config-data\") pod \"89ac4ce1-6229-4354-a3a7-13251f691937\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786479 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-combined-ca-bundle\") pod \"89ac4ce1-6229-4354-a3a7-13251f691937\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786512 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-swift-storage-0\") pod \"9c09608f-53ce-4d79-85d0-75bf0e552380\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786541 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-sb\") pod \"9c09608f-53ce-4d79-85d0-75bf0e552380\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786599 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hssmx\" (UniqueName: \"kubernetes.io/projected/9c09608f-53ce-4d79-85d0-75bf0e552380-kube-api-access-hssmx\") pod \"9c09608f-53ce-4d79-85d0-75bf0e552380\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786642 4902 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-gkz6r\" (UniqueName: \"kubernetes.io/projected/89ac4ce1-6229-4354-a3a7-13251f691937-kube-api-access-gkz6r\") pod \"89ac4ce1-6229-4354-a3a7-13251f691937\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786674 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ac4ce1-6229-4354-a3a7-13251f691937-logs\") pod \"89ac4ce1-6229-4354-a3a7-13251f691937\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786750 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-svc\") pod \"9c09608f-53ce-4d79-85d0-75bf0e552380\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786839 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-config\") pod \"9c09608f-53ce-4d79-85d0-75bf0e552380\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786933 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-nova-metadata-tls-certs\") pod \"89ac4ce1-6229-4354-a3a7-13251f691937\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.789142 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89ac4ce1-6229-4354-a3a7-13251f691937-logs" (OuterVolumeSpecName: "logs") pod "89ac4ce1-6229-4354-a3a7-13251f691937" (UID: "89ac4ce1-6229-4354-a3a7-13251f691937"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.792971 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c09608f-53ce-4d79-85d0-75bf0e552380-kube-api-access-hssmx" (OuterVolumeSpecName: "kube-api-access-hssmx") pod "9c09608f-53ce-4d79-85d0-75bf0e552380" (UID: "9c09608f-53ce-4d79-85d0-75bf0e552380"). InnerVolumeSpecName "kube-api-access-hssmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.794886 4902 scope.go:117] "RemoveContainer" containerID="46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.805247 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ac4ce1-6229-4354-a3a7-13251f691937-kube-api-access-gkz6r" (OuterVolumeSpecName: "kube-api-access-gkz6r") pod "89ac4ce1-6229-4354-a3a7-13251f691937" (UID: "89ac4ce1-6229-4354-a3a7-13251f691937"). InnerVolumeSpecName "kube-api-access-gkz6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.839072 4902 scope.go:117] "RemoveContainer" containerID="5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83" Jan 21 14:55:42 crc kubenswrapper[4902]: E0121 14:55:42.840787 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83\": container with ID starting with 5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83 not found: ID does not exist" containerID="5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.840827 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83"} err="failed to get container status \"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83\": rpc error: code = NotFound desc = could not find container \"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83\": container with ID starting with 5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83 not found: ID does not exist" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.840852 4902 scope.go:117] "RemoveContainer" containerID="46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3" Jan 21 14:55:42 crc kubenswrapper[4902]: E0121 14:55:42.841706 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3\": container with ID starting with 46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3 not found: ID does not exist" containerID="46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.841748 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3"} err="failed to get container status \"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3\": rpc error: code = NotFound desc = could not find container \"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3\": container with ID starting with 46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3 not found: ID does not exist" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.841766 4902 scope.go:117] "RemoveContainer" containerID="5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.848953 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-config-data" (OuterVolumeSpecName: "config-data") pod "89ac4ce1-6229-4354-a3a7-13251f691937" (UID: "89ac4ce1-6229-4354-a3a7-13251f691937"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.849613 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83"} err="failed to get container status \"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83\": rpc error: code = NotFound desc = could not find container \"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83\": container with ID starting with 5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83 not found: ID does not exist" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.849663 4902 scope.go:117] "RemoveContainer" containerID="46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.850354 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3"} err="failed to get container status \"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3\": rpc error: code = NotFound desc = could not find container \"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3\": container with ID starting with 46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3 not found: ID does not exist" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.869789 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9c09608f-53ce-4d79-85d0-75bf0e552380" (UID: "9c09608f-53ce-4d79-85d0-75bf0e552380"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.873176 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89ac4ce1-6229-4354-a3a7-13251f691937" (UID: "89ac4ce1-6229-4354-a3a7-13251f691937"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.874327 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9c09608f-53ce-4d79-85d0-75bf0e552380" (UID: "9c09608f-53ce-4d79-85d0-75bf0e552380"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.887314 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-config" (OuterVolumeSpecName: "config") pod "9c09608f-53ce-4d79-85d0-75bf0e552380" (UID: "9c09608f-53ce-4d79-85d0-75bf0e552380"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.898257 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9c09608f-53ce-4d79-85d0-75bf0e552380" (UID: "9c09608f-53ce-4d79-85d0-75bf0e552380"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.911192 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.911593 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "89ac4ce1-6229-4354-a3a7-13251f691937" (UID: "89ac4ce1-6229-4354-a3a7-13251f691937"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.912359 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.912390 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.912404 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hssmx\" (UniqueName: \"kubernetes.io/projected/9c09608f-53ce-4d79-85d0-75bf0e552380-kube-api-access-hssmx\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.912416 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkz6r\" (UniqueName: \"kubernetes.io/projected/89ac4ce1-6229-4354-a3a7-13251f691937-kube-api-access-gkz6r\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.912427 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ac4ce1-6229-4354-a3a7-13251f691937-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.912437 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.912448 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.913600 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9c09608f-53ce-4d79-85d0-75bf0e552380" (UID: "9c09608f-53ce-4d79-85d0-75bf0e552380"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.016524 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.017247 4902 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.017281 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.068837 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.069066 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-94qng"] Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.078915 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-94qng"] Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.103646 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.115241 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.118332 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hbvf\" (UniqueName: \"kubernetes.io/projected/169597ed-1e1f-490a-8d17-0d6520ae39d1-kube-api-access-7hbvf\") pod \"169597ed-1e1f-490a-8d17-0d6520ae39d1\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.118751 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-config-data\") pod \"169597ed-1e1f-490a-8d17-0d6520ae39d1\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.118807 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-scripts\") pod \"169597ed-1e1f-490a-8d17-0d6520ae39d1\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.118840 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-combined-ca-bundle\") pod \"169597ed-1e1f-490a-8d17-0d6520ae39d1\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.124420 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.124997 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" containerName="nova-metadata-metadata" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.125091 4902 
state_mem.go:107] "Deleted CPUSet assignment" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" containerName="nova-metadata-metadata" Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.125173 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c09608f-53ce-4d79-85d0-75bf0e552380" containerName="dnsmasq-dns" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.125253 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c09608f-53ce-4d79-85d0-75bf0e552380" containerName="dnsmasq-dns" Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.125321 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86cb92f1-5dde-4389-a5c8-1c0f76b1478d" containerName="nova-manage" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.125371 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="86cb92f1-5dde-4389-a5c8-1c0f76b1478d" containerName="nova-manage" Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.125435 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c09608f-53ce-4d79-85d0-75bf0e552380" containerName="init" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.125495 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c09608f-53ce-4d79-85d0-75bf0e552380" containerName="init" Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.125562 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" containerName="nova-metadata-log" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.125614 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" containerName="nova-metadata-log" Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.125673 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169597ed-1e1f-490a-8d17-0d6520ae39d1" containerName="nova-cell1-conductor-db-sync" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.125724 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="169597ed-1e1f-490a-8d17-0d6520ae39d1" containerName="nova-cell1-conductor-db-sync" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.125947 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="86cb92f1-5dde-4389-a5c8-1c0f76b1478d" containerName="nova-manage" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.126074 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c09608f-53ce-4d79-85d0-75bf0e552380" containerName="dnsmasq-dns" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.126141 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="169597ed-1e1f-490a-8d17-0d6520ae39d1" containerName="nova-cell1-conductor-db-sync" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.126218 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" containerName="nova-metadata-metadata" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.126291 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" containerName="nova-metadata-log" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.128035 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.126240 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-scripts" (OuterVolumeSpecName: "scripts") pod "169597ed-1e1f-490a-8d17-0d6520ae39d1" (UID: "169597ed-1e1f-490a-8d17-0d6520ae39d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.132557 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.133243 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.134251 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169597ed-1e1f-490a-8d17-0d6520ae39d1-kube-api-access-7hbvf" (OuterVolumeSpecName: "kube-api-access-7hbvf") pod "169597ed-1e1f-490a-8d17-0d6520ae39d1" (UID: "169597ed-1e1f-490a-8d17-0d6520ae39d1"). InnerVolumeSpecName "kube-api-access-7hbvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.163983 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.186524 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "169597ed-1e1f-490a-8d17-0d6520ae39d1" (UID: "169597ed-1e1f-490a-8d17-0d6520ae39d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.197112 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-config-data" (OuterVolumeSpecName: "config-data") pod "169597ed-1e1f-490a-8d17-0d6520ae39d1" (UID: "169597ed-1e1f-490a-8d17-0d6520ae39d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.222179 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.222221 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.222230 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.222240 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hbvf\" (UniqueName: \"kubernetes.io/projected/169597ed-1e1f-490a-8d17-0d6520ae39d1-kube-api-access-7hbvf\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.325373 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-logs\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.325472 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.325530 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g62v\" (UniqueName: \"kubernetes.io/projected/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-kube-api-access-4g62v\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.325623 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-config-data\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.325682 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.427575 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.427635 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-logs\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.427778 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.427889 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g62v\" (UniqueName: \"kubernetes.io/projected/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-kube-api-access-4g62v\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.428035 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-config-data\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.428463 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-logs\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.433762 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-config-data\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.434537 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.448665 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.451711 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g62v\" (UniqueName: \"kubernetes.io/projected/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-kube-api-access-4g62v\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.468885 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.598386 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.736801 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-sg-core-conf-yaml\") pod \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.736881 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-combined-ca-bundle\") pod \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.736917 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-log-httpd\") pod \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.736978 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-config-data\") pod \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.736999 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsdxx\" (UniqueName: \"kubernetes.io/projected/8167d9b9-ec38-488f-90e8-d5e11a6b75be-kube-api-access-wsdxx\") pod \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.737033 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-scripts\") pod \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.737124 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-run-httpd\") pod \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.737971 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8167d9b9-ec38-488f-90e8-d5e11a6b75be" (UID: "8167d9b9-ec38-488f-90e8-d5e11a6b75be"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.745487 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8167d9b9-ec38-488f-90e8-d5e11a6b75be" (UID: "8167d9b9-ec38-488f-90e8-d5e11a6b75be"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.750617 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.751467 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="ceilometer-notification-agent" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.751484 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="ceilometer-notification-agent" Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.751499 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="sg-core" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.751504 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="sg-core" Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.751517 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="ceilometer-central-agent" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.751523 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="ceilometer-central-agent" Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.751535 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="proxy-httpd" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.751540 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="proxy-httpd" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.751710 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="proxy-httpd" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.751726 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="sg-core" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.751741 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="ceilometer-central-agent" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.751749 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="ceilometer-notification-agent" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.752340 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.762356 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8167d9b9-ec38-488f-90e8-d5e11a6b75be-kube-api-access-wsdxx" (OuterVolumeSpecName: "kube-api-access-wsdxx") pod "8167d9b9-ec38-488f-90e8-d5e11a6b75be" (UID: "8167d9b9-ec38-488f-90e8-d5e11a6b75be"). InnerVolumeSpecName "kube-api-access-wsdxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.766835 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.782289 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-scripts" (OuterVolumeSpecName: "scripts") pod "8167d9b9-ec38-488f-90e8-d5e11a6b75be" (UID: "8167d9b9-ec38-488f-90e8-d5e11a6b75be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787614 4902 generic.go:334] "Generic (PLEG): container finished" podID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerID="409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3" exitCode=0 Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787644 4902 generic.go:334] "Generic (PLEG): container finished" podID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerID="f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e" exitCode=0 Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787653 4902 generic.go:334] "Generic (PLEG): container finished" podID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerID="ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1" exitCode=0 Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787690 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerDied","Data":"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3"} Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787718 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerDied","Data":"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e"} Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787730 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerDied","Data":"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1"} Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787743 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerDied","Data":"46febc8a2b0ea55e7f385549716696e2304cee994189d6c31ce4c2f325ad134b"} Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787769 4902 scope.go:117] "RemoveContainer" containerID="409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787887 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.794190 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.794218 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" event={"ID":"169597ed-1e1f-490a-8d17-0d6520ae39d1","Type":"ContainerDied","Data":"0835df5efec2028b909596249b9d9f9a73e0f10cf3316792b66bed135b1a92af"} Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.794277 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0835df5efec2028b909596249b9d9f9a73e0f10cf3316792b66bed135b1a92af" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.809927 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b52494a8-ff56-449e-a274-b37eb4bad43d","Type":"ContainerStarted","Data":"af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723"} Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.810380 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="cabfbeed-c979-4978-bdeb-68ac2c9023a1" containerName="nova-scheduler-scheduler" containerID="cri-o://0a33fc5008db4187bd7ddd59bc8804f14a26d7077e5c5b41e22b0a33b1e2dff7" gracePeriod=30 Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.895245 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.895281 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.895296 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsdxx\" (UniqueName: \"kubernetes.io/projected/8167d9b9-ec38-488f-90e8-d5e11a6b75be-kube-api-access-wsdxx\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.895315 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.900309 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.401914252 podStartE2EDuration="3.900289897s" podCreationTimestamp="2026-01-21 14:55:40 +0000 UTC" firstStartedPulling="2026-01-21 14:55:41.916435488 +0000 UTC m=+1303.993268517" lastFinishedPulling="2026-01-21 14:55:42.414811133 +0000 UTC m=+1304.491644162" observedRunningTime="2026-01-21 14:55:43.882388769 +0000 UTC m=+1305.959221788" watchObservedRunningTime="2026-01-21 14:55:43.900289897 +0000 UTC m=+1305.977122926" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.910157 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8167d9b9-ec38-488f-90e8-d5e11a6b75be" (UID: "8167d9b9-ec38-488f-90e8-d5e11a6b75be"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.988252 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-config-data" (OuterVolumeSpecName: "config-data") pod "8167d9b9-ec38-488f-90e8-d5e11a6b75be" (UID: "8167d9b9-ec38-488f-90e8-d5e11a6b75be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.002720 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tbt5\" (UniqueName: \"kubernetes.io/projected/dbc235c8-beef-433d-b663-e1d09b6a9b65-kube-api-access-8tbt5\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.002803 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.002849 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.003001 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.003014 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.008204 4902 scope.go:117] "RemoveContainer" containerID="8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.032764 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.103606 4902 scope.go:117] "RemoveContainer" containerID="f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.104124 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tbt5\" (UniqueName: \"kubernetes.io/projected/dbc235c8-beef-433d-b663-e1d09b6a9b65-kube-api-access-8tbt5\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.104187 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.104225 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.107280 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8167d9b9-ec38-488f-90e8-d5e11a6b75be" (UID: "8167d9b9-ec38-488f-90e8-d5e11a6b75be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.108713 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.109802 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.122938 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tbt5\" (UniqueName: \"kubernetes.io/projected/dbc235c8-beef-433d-b663-e1d09b6a9b65-kube-api-access-8tbt5\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.132881 4902 scope.go:117] "RemoveContainer" containerID="ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.164055 4902 scope.go:117] "RemoveContainer" containerID="409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3" Jan 21 14:55:44 crc kubenswrapper[4902]: E0121 14:55:44.164959 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3\": container with ID starting with 409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3 not found: ID does not exist" containerID="409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.164990 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3"} err="failed to get container status \"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3\": rpc error: code = NotFound desc = could not find container \"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3\": container with ID starting with 409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.165011 4902 scope.go:117] "RemoveContainer" containerID="8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35" Jan 21 14:55:44 crc kubenswrapper[4902]: E0121 14:55:44.165295 4902 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35\": container with ID starting with 8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35 not found: ID does not exist" containerID="8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.165321 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35"} err="failed to get container status \"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35\": rpc error: code = NotFound desc = could not find container \"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35\": container with ID starting with 8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.165340 4902 scope.go:117] "RemoveContainer" containerID="f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e" Jan 21 14:55:44 crc kubenswrapper[4902]: E0121 14:55:44.165620 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e\": container with ID starting with f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e not found: ID does not exist" containerID="f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.165643 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e"} err="failed to get container status \"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e\": rpc error: code = NotFound desc = could not find container \"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e\": container with ID starting with f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.165664 4902 scope.go:117] "RemoveContainer" containerID="ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1" Jan 21 14:55:44 crc kubenswrapper[4902]: E0121 14:55:44.165932 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1\": container with ID starting with ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1 not found: ID does not exist" containerID="ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.165952 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1"} err="failed to get container status \"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1\": rpc error: code = NotFound desc = could not find container \"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1\": container with ID starting with ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.165965 4902 scope.go:117] "RemoveContainer" 
containerID="409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.166237 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3"} err="failed to get container status \"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3\": rpc error: code = NotFound desc = could not find container \"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3\": container with ID starting with 409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.166254 4902 scope.go:117] "RemoveContainer" containerID="8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.167688 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35"} err="failed to get container status \"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35\": rpc error: code = NotFound desc = could not find container \"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35\": container with ID starting with 8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.167712 4902 scope.go:117] "RemoveContainer" containerID="f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.168199 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e"} err="failed to get container status \"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e\": rpc error: code = NotFound desc = could not find container \"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e\": container with ID starting with f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.168236 4902 scope.go:117] "RemoveContainer" containerID="ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.168621 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1"} err="failed to get container status \"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1\": rpc error: code = NotFound desc = could not find container \"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1\": container with ID starting with ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.168640 4902 scope.go:117] "RemoveContainer" containerID="409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.168831 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3"} err="failed to get container status \"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3\": rpc error: code = NotFound desc = could not find 
container \"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3\": container with ID starting with 409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.168847 4902 scope.go:117] "RemoveContainer" containerID="8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.169082 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35"} err="failed to get container status \"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35\": rpc error: code = NotFound desc = could not find container \"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35\": container with ID starting with 8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.169110 4902 scope.go:117] "RemoveContainer" containerID="f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.170996 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e"} err="failed to get container status \"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e\": rpc error: code = NotFound desc = could not find container \"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e\": container with ID starting with f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.171036 4902 scope.go:117] "RemoveContainer" containerID="ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.171313 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1"} err="failed to get container status \"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1\": rpc error: code = NotFound desc = could not find container \"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1\": container with ID starting with ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.208773 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.277464 4902 util.go:30] "No sandbox for pod can be found. 
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.307639 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" path="/var/lib/kubelet/pods/89ac4ce1-6229-4354-a3a7-13251f691937/volumes"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.308366 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c09608f-53ce-4d79-85d0-75bf0e552380" path="/var/lib/kubelet/pods/9c09608f-53ce-4d79-85d0-75bf0e552380/volumes"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.498470 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.542106 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.563911 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.570219 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.573609 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.573672 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.573772 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.575355 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.722447 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-log-httpd\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.723218 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.723536 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-config-data\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.723602 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.723714 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.723769 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc46c\" (UniqueName: \"kubernetes.io/projected/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-kube-api-access-zc46c\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.723827 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-scripts\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.723910 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-run-httpd\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.825453 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.825487 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc46c\" (UniqueName: \"kubernetes.io/projected/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-kube-api-access-zc46c\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.825518 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-scripts\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.825571 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-run-httpd\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.825599 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-log-httpd\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.825641 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.825694 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-config-data\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.825715 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.827841 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-log-httpd\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.828227 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-run-httpd\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.838594 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-config-data\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.842447 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.842546 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a","Type":"ContainerStarted","Data":"f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675"}
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.842591 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.842609 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a","Type":"ContainerStarted","Data":"af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7"}
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.842621 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a","Type":"ContainerStarted","Data":"420d4e5dbc151ab2860e03ff284c833763ae6b775900cb8f0097accb8dfdab8c"}
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.844492 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.844650 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-scripts\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.845721 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc46c\" (UniqueName: \"kubernetes.io/projected/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-kube-api-access-zc46c\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.846489 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.847918 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.880487 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.880275478 podStartE2EDuration="1.880275478s" podCreationTimestamp="2026-01-21 14:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:44.872162011 +0000 UTC m=+1306.948995040" watchObservedRunningTime="2026-01-21 14:55:44.880275478 +0000 UTC m=+1306.957108507"
Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.914257 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 14:55:45 crc kubenswrapper[4902]: W0121 14:55:45.395615 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46fd9f2d_9b3f_46b4_9e16_8d0629431b8c.slice/crio-958f1bdb995189beafc661310dd6706581a17051cfd226d0ba249d51be82b53c WatchSource:0}: Error finding container 958f1bdb995189beafc661310dd6706581a17051cfd226d0ba249d51be82b53c: Status 404 returned error can't find the container with id 958f1bdb995189beafc661310dd6706581a17051cfd226d0ba249d51be82b53c
Jan 21 14:55:45 crc kubenswrapper[4902]: I0121 14:55:45.397735 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:55:45 crc kubenswrapper[4902]: I0121 14:55:45.849993 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"dbc235c8-beef-433d-b663-e1d09b6a9b65","Type":"ContainerStarted","Data":"357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae"}
Jan 21 14:55:45 crc kubenswrapper[4902]: I0121 14:55:45.850057 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"dbc235c8-beef-433d-b663-e1d09b6a9b65","Type":"ContainerStarted","Data":"b81adfeafc100f247345bb4dc1ec0bbf1a637bdabc4a363633412eb4f663c5f6"}
Jan 21 14:55:45 crc kubenswrapper[4902]: I0121 14:55:45.850369 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 21 14:55:45 crc kubenswrapper[4902]: I0121 14:55:45.853543 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerStarted","Data":"958f1bdb995189beafc661310dd6706581a17051cfd226d0ba249d51be82b53c"}
Jan 21 14:55:45 crc kubenswrapper[4902]: I0121 14:55:45.874428 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.874409986 podStartE2EDuration="2.874409986s" podCreationTimestamp="2026-01-21 14:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:45.870407229 +0000 UTC m=+1307.947240258" watchObservedRunningTime="2026-01-21 14:55:45.874409986 +0000 UTC m=+1307.951243015"
Jan 21 14:55:46 crc kubenswrapper[4902]: E0121 14:55:46.257917 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a33fc5008db4187bd7ddd59bc8804f14a26d7077e5c5b41e22b0a33b1e2dff7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 21 14:55:46 crc kubenswrapper[4902]: E0121 14:55:46.259456 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a33fc5008db4187bd7ddd59bc8804f14a26d7077e5c5b41e22b0a33b1e2dff7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 21 14:55:46 crc kubenswrapper[4902]: E0121 14:55:46.260854 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a33fc5008db4187bd7ddd59bc8804f14a26d7077e5c5b41e22b0a33b1e2dff7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 21 14:55:46 crc kubenswrapper[4902]: E0121 14:55:46.260892 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="cabfbeed-c979-4978-bdeb-68ac2c9023a1" containerName="nova-scheduler-scheduler"
Jan 21 14:55:46 crc kubenswrapper[4902]: I0121 14:55:46.306563 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" path="/var/lib/kubelet/pods/8167d9b9-ec38-488f-90e8-d5e11a6b75be/volumes"
Jan 21 14:55:46 crc kubenswrapper[4902]: I0121 14:55:46.866059 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerStarted","Data":"73d5c6a2e3d4353a6126370981cecb65384400070335eb42053d76f078ba3998"}
Jan 21 14:55:47 crc kubenswrapper[4902]: I0121 14:55:47.877123 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerStarted","Data":"9dcf6007e20245ce52b38ae3acb298e83909872d194cb4df4467e5731bef26a7"}
Jan 21 14:55:47 crc kubenswrapper[4902]: I0121 14:55:47.878304 4902 generic.go:334] "Generic (PLEG): container finished" podID="cabfbeed-c979-4978-bdeb-68ac2c9023a1" containerID="0a33fc5008db4187bd7ddd59bc8804f14a26d7077e5c5b41e22b0a33b1e2dff7" exitCode=0
Jan 21 14:55:47 crc kubenswrapper[4902]: I0121 14:55:47.878328 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cabfbeed-c979-4978-bdeb-68ac2c9023a1","Type":"ContainerDied","Data":"0a33fc5008db4187bd7ddd59bc8804f14a26d7077e5c5b41e22b0a33b1e2dff7"}
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.408288 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.470478 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.470523 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.500555 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rzjx\" (UniqueName: \"kubernetes.io/projected/cabfbeed-c979-4978-bdeb-68ac2c9023a1-kube-api-access-7rzjx\") pod \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") "
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.500758 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-config-data\") pod \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") "
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.500799 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-combined-ca-bundle\") pod \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") "
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.510256 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cabfbeed-c979-4978-bdeb-68ac2c9023a1-kube-api-access-7rzjx" (OuterVolumeSpecName: "kube-api-access-7rzjx") pod "cabfbeed-c979-4978-bdeb-68ac2c9023a1" (UID: "cabfbeed-c979-4978-bdeb-68ac2c9023a1"). InnerVolumeSpecName "kube-api-access-7rzjx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.529798 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cabfbeed-c979-4978-bdeb-68ac2c9023a1" (UID: "cabfbeed-c979-4978-bdeb-68ac2c9023a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.558315 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-config-data" (OuterVolumeSpecName: "config-data") pod "cabfbeed-c979-4978-bdeb-68ac2c9023a1" (UID: "cabfbeed-c979-4978-bdeb-68ac2c9023a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.604383 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.604657 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.604668 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rzjx\" (UniqueName: \"kubernetes.io/projected/cabfbeed-c979-4978-bdeb-68ac2c9023a1-kube-api-access-7rzjx\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.780460 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.888787 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerStarted","Data":"7ad84b6a868fbbeb6497690a3b9ab62535445a584970624ac9e7776480ccd69f"}
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.891882 4902 generic.go:334] "Generic (PLEG): container finished" podID="09716208-ecef-418b-b04b-fcfad53e017d" containerID="adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a" exitCode=0
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.891901 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09716208-ecef-418b-b04b-fcfad53e017d","Type":"ContainerDied","Data":"adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a"}
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.891913 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.891956 4902 scope.go:117] "RemoveContainer" containerID="adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.891945 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09716208-ecef-418b-b04b-fcfad53e017d","Type":"ContainerDied","Data":"34f32efff8c1f6aabcc0c5371906f0935d5fd2d86c65b2814e1ce5ed501c9460"}
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.896245 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cabfbeed-c979-4978-bdeb-68ac2c9023a1","Type":"ContainerDied","Data":"d489edc5237f4ac81854560a301bc41de7cb9dc499684a0dffb569a687d5db5d"}
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.896454 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.907961 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09716208-ecef-418b-b04b-fcfad53e017d-logs\") pod \"09716208-ecef-418b-b04b-fcfad53e017d\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") "
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.908671 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09716208-ecef-418b-b04b-fcfad53e017d-logs" (OuterVolumeSpecName: "logs") pod "09716208-ecef-418b-b04b-fcfad53e017d" (UID: "09716208-ecef-418b-b04b-fcfad53e017d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.908901 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-combined-ca-bundle\") pod \"09716208-ecef-418b-b04b-fcfad53e017d\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") "
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.909005 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xcdf\" (UniqueName: \"kubernetes.io/projected/09716208-ecef-418b-b04b-fcfad53e017d-kube-api-access-9xcdf\") pod \"09716208-ecef-418b-b04b-fcfad53e017d\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") "
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.909031 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-config-data\") pod \"09716208-ecef-418b-b04b-fcfad53e017d\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") "
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.910310 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09716208-ecef-418b-b04b-fcfad53e017d-logs\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.913533 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09716208-ecef-418b-b04b-fcfad53e017d-kube-api-access-9xcdf" (OuterVolumeSpecName: "kube-api-access-9xcdf") pod "09716208-ecef-418b-b04b-fcfad53e017d" (UID: "09716208-ecef-418b-b04b-fcfad53e017d"). InnerVolumeSpecName "kube-api-access-9xcdf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.914403 4902 scope.go:117] "RemoveContainer" containerID="802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.936112 4902 scope.go:117] "RemoveContainer" containerID="adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a"
Jan 21 14:55:48 crc kubenswrapper[4902]: E0121 14:55:48.936909 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a\": container with ID starting with adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a not found: ID does not exist" containerID="adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.936948 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a"} err="failed to get container status \"adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a\": rpc error: code = NotFound desc = could not find container \"adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a\": container with ID starting with adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a not found: ID does not exist"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.936986 4902 scope.go:117] "RemoveContainer" containerID="802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d"
Jan 21 14:55:48 crc kubenswrapper[4902]: E0121 14:55:48.937363 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d\": container with ID starting with 802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d not found: ID does not exist" containerID="802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.937403 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d"} err="failed to get container status \"802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d\": rpc error: code = NotFound desc = could not find container \"802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d\": container with ID starting with 802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d not found: ID does not exist"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.937428 4902 scope.go:117] "RemoveContainer" containerID="0a33fc5008db4187bd7ddd59bc8804f14a26d7077e5c5b41e22b0a33b1e2dff7"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.950657 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.965424 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.965551 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09716208-ecef-418b-b04b-fcfad53e017d" (UID: "09716208-ecef-418b-b04b-fcfad53e017d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.986759 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 14:55:48 crc kubenswrapper[4902]: E0121 14:55:48.987315 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-log"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.987338 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-log"
Jan 21 14:55:48 crc kubenswrapper[4902]: E0121 14:55:48.987358 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-api"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.987366 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-api"
Jan 21 14:55:48 crc kubenswrapper[4902]: E0121 14:55:48.987397 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cabfbeed-c979-4978-bdeb-68ac2c9023a1" containerName="nova-scheduler-scheduler"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.987406 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cabfbeed-c979-4978-bdeb-68ac2c9023a1" containerName="nova-scheduler-scheduler"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.987634 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="cabfbeed-c979-4978-bdeb-68ac2c9023a1" containerName="nova-scheduler-scheduler"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.987671 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-api"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.987686 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-log"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.988864 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.992199 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-config-data" (OuterVolumeSpecName: "config-data") pod "09716208-ecef-418b-b04b-fcfad53e017d" (UID: "09716208-ecef-418b-b04b-fcfad53e017d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.997004 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.997014 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.015468 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.015496 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xcdf\" (UniqueName: \"kubernetes.io/projected/09716208-ecef-418b-b04b-fcfad53e017d-kube-api-access-9xcdf\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.015506 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:49 crc kubenswrapper[4902]: E0121 14:55:49.026085 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcabfbeed_c979_4978_bdeb_68ac2c9023a1.slice\": RecentStats: unable to find data in memory cache]"
Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.116627 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0"
Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.116692 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c97cm\" (UniqueName: \"kubernetes.io/projected/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-kube-api-access-c97cm\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0"
Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.116837 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-config-data\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0"
Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.218120 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-config-data\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0"
Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.218200 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0"
Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.218237 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c97cm\" (UniqueName: \"kubernetes.io/projected/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-kube-api-access-c97cm\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0"
volume \"kube-api-access-c97cm\" (UniqueName: \"kubernetes.io/projected/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-kube-api-access-c97cm\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.224236 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.227152 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-config-data\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.227159 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.232070 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.251787 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.253606 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.255712 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.264186 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.264603 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c97cm\" (UniqueName: \"kubernetes.io/projected/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-kube-api-access-c97cm\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.314162 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.319383 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.319434 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-config-data\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.319454 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5577ca29-6f08-4b68-954f-8bdff5d886cc-logs\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.319617 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv2xq\" (UniqueName: \"kubernetes.io/projected/5577ca29-6f08-4b68-954f-8bdff5d886cc-kube-api-access-hv2xq\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.422137 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.422193 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-config-data\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.422211 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5577ca29-6f08-4b68-954f-8bdff5d886cc-logs\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.422293 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv2xq\" (UniqueName: \"kubernetes.io/projected/5577ca29-6f08-4b68-954f-8bdff5d886cc-kube-api-access-hv2xq\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.426090 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5577ca29-6f08-4b68-954f-8bdff5d886cc-logs\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.442082 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-config-data\") pod \"nova-api-0\" (UID: 
\"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.470064 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.478998 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv2xq\" (UniqueName: \"kubernetes.io/projected/5577ca29-6f08-4b68-954f-8bdff5d886cc-kube-api-access-hv2xq\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.583789 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.810865 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.922830 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2cfa319b-3748-4cf5-9254-2af8ad04ffdc","Type":"ContainerStarted","Data":"4096ebf03a14e0e73e0ea175c0db3b27f6bd8591d31d085c11182fa32bf5e186"} Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.117367 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:50 crc kubenswrapper[4902]: W0121 14:55:50.118962 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5577ca29_6f08_4b68_954f_8bdff5d886cc.slice/crio-96e9941d2b125212d0ca1c62e5f8963a43be81ae9253acc0176ed7621b2023bf WatchSource:0}: Error finding container 96e9941d2b125212d0ca1c62e5f8963a43be81ae9253acc0176ed7621b2023bf: Status 404 returned error can't find the container with id 96e9941d2b125212d0ca1c62e5f8963a43be81ae9253acc0176ed7621b2023bf Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.314831 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09716208-ecef-418b-b04b-fcfad53e017d" path="/var/lib/kubelet/pods/09716208-ecef-418b-b04b-fcfad53e017d/volumes" Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.315984 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cabfbeed-c979-4978-bdeb-68ac2c9023a1" path="/var/lib/kubelet/pods/cabfbeed-c979-4978-bdeb-68ac2c9023a1/volumes" Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.940703 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2cfa319b-3748-4cf5-9254-2af8ad04ffdc","Type":"ContainerStarted","Data":"2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096"} Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.942707 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5577ca29-6f08-4b68-954f-8bdff5d886cc","Type":"ContainerStarted","Data":"c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be"} Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.942742 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5577ca29-6f08-4b68-954f-8bdff5d886cc","Type":"ContainerStarted","Data":"ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1"} Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.942756 4902 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5577ca29-6f08-4b68-954f-8bdff5d886cc","Type":"ContainerStarted","Data":"96e9941d2b125212d0ca1c62e5f8963a43be81ae9253acc0176ed7621b2023bf"} Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.960704 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.96068865 podStartE2EDuration="2.96068865s" podCreationTimestamp="2026-01-21 14:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:50.955633336 +0000 UTC m=+1313.032466385" watchObservedRunningTime="2026-01-21 14:55:50.96068865 +0000 UTC m=+1313.037521679" Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.984131 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.9841131349999999 podStartE2EDuration="1.984113135s" podCreationTimestamp="2026-01-21 14:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:50.975873466 +0000 UTC m=+1313.052706505" watchObservedRunningTime="2026-01-21 14:55:50.984113135 +0000 UTC m=+1313.060946164" Jan 21 14:55:51 crc kubenswrapper[4902]: I0121 14:55:51.305244 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 14:55:53 crc kubenswrapper[4902]: I0121 14:55:53.469836 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 14:55:53 crc kubenswrapper[4902]: I0121 14:55:53.470474 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 14:55:54 crc kubenswrapper[4902]: I0121 14:55:54.313381 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:54 crc kubenswrapper[4902]: I0121 14:55:54.314471 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 14:55:54 crc kubenswrapper[4902]: I0121 14:55:54.484227 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:55:54 crc kubenswrapper[4902]: I0121 14:55:54.484389 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:55:56 crc kubenswrapper[4902]: I0121 14:55:56.002631 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerStarted","Data":"bd52d798dc010916ae8edaeb3274cb5243189094cb02b39553324e926f46afe7"} Jan 21 14:55:56 crc kubenswrapper[4902]: I0121 14:55:56.003217 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:55:56 crc kubenswrapper[4902]: I0121 14:55:56.036300 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
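[editor's note] The ceilometer-0 startup numbers above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (14:55:56.036 - 14:55:44 = 12.036s), and podStartSLOduration is, to within rounding, that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling, about 9.342s), giving about 2.695s. A quick check of that arithmetic (the subtraction relationship is inferred from these fields, not taken from kubelet source):

package main

import (
	"fmt"
	"time"
)

// mustParse parses the timestamp format used in the log fields above.
func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-21 14:55:44 +0000 UTC")
	running := mustParse("2026-01-21 14:55:56.036277201 +0000 UTC")
	pullStart := mustParse("2026-01-21 14:55:45.397770902 +0000 UTC")
	pullEnd := mustParse("2026-01-21 14:55:54.739409737 +0000 UTC")

	e2e := running.Sub(created)      // 12.036277201s, the logged podStartE2EDuration
	pull := pullEnd.Sub(pullStart)   // ~9.342s spent pulling images
	fmt.Println(e2e, pull, e2e-pull) // e2e-pull ~= the logged podStartSLOduration
}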
pod="openstack/ceilometer-0" podStartSLOduration=2.694638346 podStartE2EDuration="12.036277201s" podCreationTimestamp="2026-01-21 14:55:44 +0000 UTC" firstStartedPulling="2026-01-21 14:55:45.397770902 +0000 UTC m=+1307.474603931" lastFinishedPulling="2026-01-21 14:55:54.739409737 +0000 UTC m=+1316.816242786" observedRunningTime="2026-01-21 14:55:56.028816192 +0000 UTC m=+1318.105649281" watchObservedRunningTime="2026-01-21 14:55:56.036277201 +0000 UTC m=+1318.113110230" Jan 21 14:55:59 crc kubenswrapper[4902]: I0121 14:55:59.315289 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 14:55:59 crc kubenswrapper[4902]: I0121 14:55:59.348116 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 14:55:59 crc kubenswrapper[4902]: I0121 14:55:59.584946 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:55:59 crc kubenswrapper[4902]: I0121 14:55:59.585036 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:56:00 crc kubenswrapper[4902]: I0121 14:56:00.067015 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 14:56:00 crc kubenswrapper[4902]: I0121 14:56:00.667358 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 14:56:00 crc kubenswrapper[4902]: I0121 14:56:00.668877 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 14:56:03 crc kubenswrapper[4902]: I0121 14:56:03.476698 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 14:56:03 crc kubenswrapper[4902]: I0121 14:56:03.477289 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 14:56:03 crc kubenswrapper[4902]: I0121 14:56:03.485958 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 14:56:03 crc kubenswrapper[4902]: I0121 14:56:03.490918 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 14:56:06 crc kubenswrapper[4902]: I0121 14:56:06.946531 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.115823 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-config-data\") pod \"c7593ca7-9aeb-4763-8bc3-964147d459ce\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.117033 4902 generic.go:334] "Generic (PLEG): container finished" podID="c7593ca7-9aeb-4763-8bc3-964147d459ce" containerID="84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b" exitCode=137 Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.117128 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c7593ca7-9aeb-4763-8bc3-964147d459ce","Type":"ContainerDied","Data":"84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b"} Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.117408 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c7593ca7-9aeb-4763-8bc3-964147d459ce","Type":"ContainerDied","Data":"20c0da8ce9148a9ce1d2bbb934c0cba1985f7cac4f00b74da4ba453452a4725d"} Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.117448 4902 scope.go:117] "RemoveContainer" containerID="84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.117450 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-combined-ca-bundle\") pod \"c7593ca7-9aeb-4763-8bc3-964147d459ce\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.117500 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnv6n\" (UniqueName: \"kubernetes.io/projected/c7593ca7-9aeb-4763-8bc3-964147d459ce-kube-api-access-fnv6n\") pod \"c7593ca7-9aeb-4763-8bc3-964147d459ce\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.117145 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.124435 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7593ca7-9aeb-4763-8bc3-964147d459ce-kube-api-access-fnv6n" (OuterVolumeSpecName: "kube-api-access-fnv6n") pod "c7593ca7-9aeb-4763-8bc3-964147d459ce" (UID: "c7593ca7-9aeb-4763-8bc3-964147d459ce"). InnerVolumeSpecName "kube-api-access-fnv6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.153327 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-config-data" (OuterVolumeSpecName: "config-data") pod "c7593ca7-9aeb-4763-8bc3-964147d459ce" (UID: "c7593ca7-9aeb-4763-8bc3-964147d459ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.162064 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7593ca7-9aeb-4763-8bc3-964147d459ce" (UID: "c7593ca7-9aeb-4763-8bc3-964147d459ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.219405 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.219467 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnv6n\" (UniqueName: \"kubernetes.io/projected/c7593ca7-9aeb-4763-8bc3-964147d459ce-kube-api-access-fnv6n\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.219488 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.249332 4902 scope.go:117] "RemoveContainer" containerID="84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b" Jan 21 14:56:07 crc kubenswrapper[4902]: E0121 14:56:07.249859 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b\": container with ID starting with 84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b not found: ID does not exist" containerID="84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.249909 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b"} err="failed to get container status \"84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b\": rpc error: code = NotFound desc = could not find container \"84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b\": container with ID starting with 84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b not found: ID does not exist" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.458770 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.471322 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.482173 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:56:07 crc kubenswrapper[4902]: E0121 14:56:07.482729 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7593ca7-9aeb-4763-8bc3-964147d459ce" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.482749 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7593ca7-9aeb-4763-8bc3-964147d459ce" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.482989 4902 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c7593ca7-9aeb-4763-8bc3-964147d459ce" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.484641 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.489149 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.489472 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.490339 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.501285 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.627256 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.627781 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.627877 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.628126 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.628207 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mw6c\" (UniqueName: \"kubernetes.io/projected/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-kube-api-access-4mw6c\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.729540 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.729608 4902 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.729684 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.729726 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mw6c\" (UniqueName: \"kubernetes.io/projected/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-kube-api-access-4mw6c\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.729798 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.736071 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.742779 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.748592 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.748829 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.754919 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mw6c\" (UniqueName: \"kubernetes.io/projected/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-kube-api-access-4mw6c\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.828926 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:08 crc kubenswrapper[4902]: I0121 14:56:08.323100 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7593ca7-9aeb-4763-8bc3-964147d459ce" path="/var/lib/kubelet/pods/c7593ca7-9aeb-4763-8bc3-964147d459ce/volumes" Jan 21 14:56:08 crc kubenswrapper[4902]: I0121 14:56:08.324728 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:56:09 crc kubenswrapper[4902]: I0121 14:56:09.140665 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044","Type":"ContainerStarted","Data":"51f3e0557ba29d0e459dc32f45c40c004e66a2616c90bcc78b93663bdae1ff99"} Jan 21 14:56:09 crc kubenswrapper[4902]: I0121 14:56:09.141361 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044","Type":"ContainerStarted","Data":"140924a047cb28624865b0efcf1a901932347a50fbd34bbfa1c4027f44fbc891"} Jan 21 14:56:09 crc kubenswrapper[4902]: I0121 14:56:09.164914 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.164898131 podStartE2EDuration="2.164898131s" podCreationTimestamp="2026-01-21 14:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:56:09.160612687 +0000 UTC m=+1331.237445716" watchObservedRunningTime="2026-01-21 14:56:09.164898131 +0000 UTC m=+1331.241731160" Jan 21 14:56:09 crc kubenswrapper[4902]: I0121 14:56:09.589964 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 14:56:09 crc kubenswrapper[4902]: I0121 14:56:09.590591 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 14:56:09 crc kubenswrapper[4902]: I0121 14:56:09.590819 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 14:56:09 crc kubenswrapper[4902]: I0121 14:56:09.600745 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.149878 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.156341 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.375377 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-gzrwg"] Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.377121 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.392682 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-gzrwg"] Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.580757 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.581128 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.581344 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-config\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.581451 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.581653 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.581693 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j28lj\" (UniqueName: \"kubernetes.io/projected/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-kube-api-access-j28lj\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.683165 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.683665 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j28lj\" (UniqueName: \"kubernetes.io/projected/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-kube-api-access-j28lj\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.683783 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.683890 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.683988 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-config\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.684128 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.684269 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.684772 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.684880 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-config\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.684956 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.684995 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.712139 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j28lj\" (UniqueName: 
\"kubernetes.io/projected/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-kube-api-access-j28lj\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.996105 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:11 crc kubenswrapper[4902]: I0121 14:56:11.452291 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-gzrwg"] Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.178598 4902 generic.go:334] "Generic (PLEG): container finished" podID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" containerID="462406faba8c1d9f8c0864988f3185e2594f2024aa4406a8b2fa2099a7006d0c" exitCode=0 Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.178836 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" event={"ID":"5ef26f87-2d73-4847-abfb-a3bbda8c01c6","Type":"ContainerDied","Data":"462406faba8c1d9f8c0864988f3185e2594f2024aa4406a8b2fa2099a7006d0c"} Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.179025 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" event={"ID":"5ef26f87-2d73-4847-abfb-a3bbda8c01c6","Type":"ContainerStarted","Data":"e36154beae48e47217e600b25e3832ce07f5b5cba75bd916fc8d19d2d77082ca"} Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.375119 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.375522 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="ceilometer-central-agent" containerID="cri-o://73d5c6a2e3d4353a6126370981cecb65384400070335eb42053d76f078ba3998" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.375678 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="proxy-httpd" containerID="cri-o://bd52d798dc010916ae8edaeb3274cb5243189094cb02b39553324e926f46afe7" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.375739 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="sg-core" containerID="cri-o://7ad84b6a868fbbeb6497690a3b9ab62535445a584970624ac9e7776480ccd69f" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.375780 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="ceilometer-notification-agent" containerID="cri-o://9dcf6007e20245ce52b38ae3acb298e83909872d194cb4df4467e5731bef26a7" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.390860 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.193:3000/\": EOF" Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.829269 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.957739 4902 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.202154 4902 generic.go:334] "Generic (PLEG): container finished" podID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerID="bd52d798dc010916ae8edaeb3274cb5243189094cb02b39553324e926f46afe7" exitCode=0 Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.202189 4902 generic.go:334] "Generic (PLEG): container finished" podID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerID="7ad84b6a868fbbeb6497690a3b9ab62535445a584970624ac9e7776480ccd69f" exitCode=2 Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.202200 4902 generic.go:334] "Generic (PLEG): container finished" podID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerID="73d5c6a2e3d4353a6126370981cecb65384400070335eb42053d76f078ba3998" exitCode=0 Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.202220 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerDied","Data":"bd52d798dc010916ae8edaeb3274cb5243189094cb02b39553324e926f46afe7"} Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.202279 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerDied","Data":"7ad84b6a868fbbeb6497690a3b9ab62535445a584970624ac9e7776480ccd69f"} Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.202294 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerDied","Data":"73d5c6a2e3d4353a6126370981cecb65384400070335eb42053d76f078ba3998"} Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.204198 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" event={"ID":"5ef26f87-2d73-4847-abfb-a3bbda8c01c6","Type":"ContainerStarted","Data":"193c2ec1f234088f5b0bf3f8d841b9715ab506a6f64990bd75f4173da10330ef"} Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.204354 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-log" containerID="cri-o://ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1" gracePeriod=30 Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.204394 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-api" containerID="cri-o://c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be" gracePeriod=30 Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.245230 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" podStartSLOduration=3.245208552 podStartE2EDuration="3.245208552s" podCreationTimestamp="2026-01-21 14:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:56:13.241810321 +0000 UTC m=+1335.318643350" watchObservedRunningTime="2026-01-21 14:56:13.245208552 +0000 UTC m=+1335.322041581" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.217307 4902 generic.go:334] "Generic (PLEG): container finished" podID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerID="ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1" exitCode=143 Jan 21 14:56:14 crc 
kubenswrapper[4902]: I0121 14:56:14.217455 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5577ca29-6f08-4b68-954f-8bdff5d886cc","Type":"ContainerDied","Data":"ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1"} Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.222721 4902 generic.go:334] "Generic (PLEG): container finished" podID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerID="9dcf6007e20245ce52b38ae3acb298e83909872d194cb4df4467e5731bef26a7" exitCode=0 Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.222778 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerDied","Data":"9dcf6007e20245ce52b38ae3acb298e83909872d194cb4df4467e5731bef26a7"} Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.223096 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.472149 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.670587 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-sg-core-conf-yaml\") pod \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.670698 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-config-data\") pod \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.670777 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-ceilometer-tls-certs\") pod \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.670845 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-scripts\") pod \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.670927 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-run-httpd\") pod \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.670968 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc46c\" (UniqueName: \"kubernetes.io/projected/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-kube-api-access-zc46c\") pod \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.670991 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-log-httpd\") pod 
\"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.671026 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-combined-ca-bundle\") pod \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.672248 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" (UID: "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.673513 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" (UID: "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.676968 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-kube-api-access-zc46c" (OuterVolumeSpecName: "kube-api-access-zc46c") pod "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" (UID: "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c"). InnerVolumeSpecName "kube-api-access-zc46c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.677501 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-scripts" (OuterVolumeSpecName: "scripts") pod "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" (UID: "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.704665 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" (UID: "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.746238 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" (UID: "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.764376 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" (UID: "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.773283 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.773312 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.773323 4902 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.773332 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.773342 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.773350 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc46c\" (UniqueName: \"kubernetes.io/projected/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-kube-api-access-zc46c\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.773359 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.799837 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-config-data" (OuterVolumeSpecName: "config-data") pod "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" (UID: "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.875801 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.235145 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.235230 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerDied","Data":"958f1bdb995189beafc661310dd6706581a17051cfd226d0ba249d51be82b53c"} Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.235365 4902 scope.go:117] "RemoveContainer" containerID="bd52d798dc010916ae8edaeb3274cb5243189094cb02b39553324e926f46afe7" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.262484 4902 scope.go:117] "RemoveContainer" containerID="7ad84b6a868fbbeb6497690a3b9ab62535445a584970624ac9e7776480ccd69f" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.275895 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.285771 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.296357 4902 scope.go:117] "RemoveContainer" containerID="9dcf6007e20245ce52b38ae3acb298e83909872d194cb4df4467e5731bef26a7" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.315625 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:56:15 crc kubenswrapper[4902]: E0121 14:56:15.316245 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="ceilometer-notification-agent" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.316263 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="ceilometer-notification-agent" Jan 21 14:56:15 crc kubenswrapper[4902]: E0121 14:56:15.316280 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="sg-core" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.316288 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="sg-core" Jan 21 14:56:15 crc kubenswrapper[4902]: E0121 14:56:15.316312 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="ceilometer-central-agent" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.316317 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="ceilometer-central-agent" Jan 21 14:56:15 crc kubenswrapper[4902]: E0121 14:56:15.316336 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="proxy-httpd" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.316342 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="proxy-httpd" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.316552 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="sg-core" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.316569 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="ceilometer-central-agent" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.316582 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="proxy-httpd" Jan 21 
14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.316595 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="ceilometer-notification-agent" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.320788 4902 scope.go:117] "RemoveContainer" containerID="73d5c6a2e3d4353a6126370981cecb65384400070335eb42053d76f078ba3998" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.322253 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.324301 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.325667 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.325807 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.330139 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.489382 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.489468 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.489495 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-run-httpd\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.489700 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-log-httpd\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.489866 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5c9s\" (UniqueName: \"kubernetes.io/projected/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-kube-api-access-r5c9s\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.489957 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-scripts\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.490176 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-config-data\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.490381 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.591500 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-config-data\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.591587 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.591652 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.591691 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.591716 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-run-httpd\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.591746 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-log-httpd\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.591773 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5c9s\" (UniqueName: \"kubernetes.io/projected/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-kube-api-access-r5c9s\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.591796 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-scripts\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 
14:56:15.592720 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-run-httpd\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.593077 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-log-httpd\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.597448 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.597467 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-scripts\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.597502 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.598236 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-config-data\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.599523 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.624500 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5c9s\" (UniqueName: \"kubernetes.io/projected/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-kube-api-access-r5c9s\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.684832 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:56:16 crc kubenswrapper[4902]: W0121 14:56:16.138467 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod874c6c46_dedc_4ec9_8ee5_c45ef9cddb53.slice/crio-c200d00278992d8d2cca7e33c912295c7207132824d0c1563c30e02dcd83a48e WatchSource:0}: Error finding container c200d00278992d8d2cca7e33c912295c7207132824d0c1563c30e02dcd83a48e: Status 404 returned error can't find the container with id c200d00278992d8d2cca7e33c912295c7207132824d0c1563c30e02dcd83a48e Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.148595 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.251370 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerStarted","Data":"c200d00278992d8d2cca7e33c912295c7207132824d0c1563c30e02dcd83a48e"} Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.308406 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" path="/var/lib/kubelet/pods/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c/volumes" Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.832892 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.917853 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-combined-ca-bundle\") pod \"5577ca29-6f08-4b68-954f-8bdff5d886cc\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.917946 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-config-data\") pod \"5577ca29-6f08-4b68-954f-8bdff5d886cc\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.918745 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5577ca29-6f08-4b68-954f-8bdff5d886cc-logs\") pod \"5577ca29-6f08-4b68-954f-8bdff5d886cc\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.918804 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv2xq\" (UniqueName: \"kubernetes.io/projected/5577ca29-6f08-4b68-954f-8bdff5d886cc-kube-api-access-hv2xq\") pod \"5577ca29-6f08-4b68-954f-8bdff5d886cc\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.919303 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5577ca29-6f08-4b68-954f-8bdff5d886cc-logs" (OuterVolumeSpecName: "logs") pod "5577ca29-6f08-4b68-954f-8bdff5d886cc" (UID: "5577ca29-6f08-4b68-954f-8bdff5d886cc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.924302 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5577ca29-6f08-4b68-954f-8bdff5d886cc-kube-api-access-hv2xq" (OuterVolumeSpecName: "kube-api-access-hv2xq") pod "5577ca29-6f08-4b68-954f-8bdff5d886cc" (UID: "5577ca29-6f08-4b68-954f-8bdff5d886cc"). InnerVolumeSpecName "kube-api-access-hv2xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.958165 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-config-data" (OuterVolumeSpecName: "config-data") pod "5577ca29-6f08-4b68-954f-8bdff5d886cc" (UID: "5577ca29-6f08-4b68-954f-8bdff5d886cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.975262 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5577ca29-6f08-4b68-954f-8bdff5d886cc" (UID: "5577ca29-6f08-4b68-954f-8bdff5d886cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.020556 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv2xq\" (UniqueName: \"kubernetes.io/projected/5577ca29-6f08-4b68-954f-8bdff5d886cc-kube-api-access-hv2xq\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.020587 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.020597 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.020605 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5577ca29-6f08-4b68-954f-8bdff5d886cc-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.261407 4902 generic.go:334] "Generic (PLEG): container finished" podID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerID="c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be" exitCode=0 Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.261484 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.261509 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5577ca29-6f08-4b68-954f-8bdff5d886cc","Type":"ContainerDied","Data":"c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be"} Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.261867 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5577ca29-6f08-4b68-954f-8bdff5d886cc","Type":"ContainerDied","Data":"96e9941d2b125212d0ca1c62e5f8963a43be81ae9253acc0176ed7621b2023bf"} Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.261885 4902 scope.go:117] "RemoveContainer" containerID="c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.263765 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerStarted","Data":"49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110"} Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.284815 4902 scope.go:117] "RemoveContainer" containerID="ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.308598 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.316419 4902 scope.go:117] "RemoveContainer" containerID="c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be" Jan 21 14:56:17 crc kubenswrapper[4902]: E0121 14:56:17.319327 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be\": container with ID starting with c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be not found: ID does not exist" containerID="c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.319379 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be"} err="failed to get container status \"c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be\": rpc error: code = NotFound desc = could not find container \"c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be\": container with ID starting with c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be not found: ID does not exist" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.319403 4902 scope.go:117] "RemoveContainer" containerID="ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.319417 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:17 crc kubenswrapper[4902]: E0121 14:56:17.324315 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1\": container with ID starting with ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1 not found: ID does not exist" containerID="ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.324370 4902 
Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.324370 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1"} err="failed to get container status \"ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1\": rpc error: code = NotFound desc = could not find container \"ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1\": container with ID starting with ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1 not found: ID does not exist"
Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.344159 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 21 14:56:17 crc kubenswrapper[4902]: E0121 14:56:17.344646 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-api"
Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.344666 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-api"
Jan 21 14:56:17 crc kubenswrapper[4902]: E0121 14:56:17.344696 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-log"
Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.344706 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-log"
Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.344945 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-log"
Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.344974 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-api"
Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.346126 4902 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.351876 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.352286 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.352436 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.354370 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.538268 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.538367 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd799afd-06ad-483d-b59d-9b5c1e947a6a-logs\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.538425 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-public-tls-certs\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.538454 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-config-data\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.538527 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.538571 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w9q5\" (UniqueName: \"kubernetes.io/projected/cd799afd-06ad-483d-b59d-9b5c1e947a6a-kube-api-access-2w9q5\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.639642 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.639714 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd799afd-06ad-483d-b59d-9b5c1e947a6a-logs\") pod \"nova-api-0\" (UID: 
\"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.639774 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-public-tls-certs\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.639794 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-config-data\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.639844 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.639863 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w9q5\" (UniqueName: \"kubernetes.io/projected/cd799afd-06ad-483d-b59d-9b5c1e947a6a-kube-api-access-2w9q5\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.640537 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd799afd-06ad-483d-b59d-9b5c1e947a6a-logs\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.643272 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.643587 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.644287 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-config-data\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.648574 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-public-tls-certs\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.662472 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w9q5\" (UniqueName: \"kubernetes.io/projected/cd799afd-06ad-483d-b59d-9b5c1e947a6a-kube-api-access-2w9q5\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " 
pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.692262 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.829529 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.855485 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.180646 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:18 crc kubenswrapper[4902]: W0121 14:56:18.183607 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd799afd_06ad_483d_b59d_9b5c1e947a6a.slice/crio-e337f3f5a33903a3ca45aa510f3c236212e8e9e8cef1f827f67fc4fbb4689ba2 WatchSource:0}: Error finding container e337f3f5a33903a3ca45aa510f3c236212e8e9e8cef1f827f67fc4fbb4689ba2: Status 404 returned error can't find the container with id e337f3f5a33903a3ca45aa510f3c236212e8e9e8cef1f827f67fc4fbb4689ba2 Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.285370 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd799afd-06ad-483d-b59d-9b5c1e947a6a","Type":"ContainerStarted","Data":"e337f3f5a33903a3ca45aa510f3c236212e8e9e8cef1f827f67fc4fbb4689ba2"} Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.288546 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerStarted","Data":"c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67"} Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.306895 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" path="/var/lib/kubelet/pods/5577ca29-6f08-4b68-954f-8bdff5d886cc/volumes" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.309624 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.494087 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-2k78d"] Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.495277 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.506414 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.509943 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.546380 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2k78d"] Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.563801 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-config-data\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.563949 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.563985 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9mgr\" (UniqueName: \"kubernetes.io/projected/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-kube-api-access-f9mgr\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.564007 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-scripts\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.664512 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.664764 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9mgr\" (UniqueName: \"kubernetes.io/projected/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-kube-api-access-f9mgr\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.664784 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-scripts\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.664855 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-config-data\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.668860 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-scripts\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.670066 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.672651 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-config-data\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.681619 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9mgr\" (UniqueName: \"kubernetes.io/projected/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-kube-api-access-f9mgr\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.691744 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:19 crc kubenswrapper[4902]: I0121 14:56:19.164960 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2k78d"] Jan 21 14:56:19 crc kubenswrapper[4902]: W0121 14:56:19.166658 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f3ab19a_d650_41ea_aadd_8ec73ed824f2.slice/crio-a6af641e9642822dba90080f660756012cdf63aa5e5e1d6680572bc43b1b15f2 WatchSource:0}: Error finding container a6af641e9642822dba90080f660756012cdf63aa5e5e1d6680572bc43b1b15f2: Status 404 returned error can't find the container with id a6af641e9642822dba90080f660756012cdf63aa5e5e1d6680572bc43b1b15f2 Jan 21 14:56:19 crc kubenswrapper[4902]: I0121 14:56:19.301864 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerStarted","Data":"91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881"} Jan 21 14:56:19 crc kubenswrapper[4902]: I0121 14:56:19.303710 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2k78d" event={"ID":"8f3ab19a-d650-41ea-aadd-8ec73ed824f2","Type":"ContainerStarted","Data":"a6af641e9642822dba90080f660756012cdf63aa5e5e1d6680572bc43b1b15f2"} Jan 21 14:56:19 crc kubenswrapper[4902]: I0121 14:56:19.306440 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd799afd-06ad-483d-b59d-9b5c1e947a6a","Type":"ContainerStarted","Data":"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e"} Jan 21 14:56:19 crc kubenswrapper[4902]: I0121 14:56:19.306709 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd799afd-06ad-483d-b59d-9b5c1e947a6a","Type":"ContainerStarted","Data":"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0"} Jan 21 14:56:19 crc kubenswrapper[4902]: I0121 14:56:19.339916 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.339892976 podStartE2EDuration="2.339892976s" podCreationTimestamp="2026-01-21 14:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:56:19.333509895 +0000 UTC m=+1341.410342924" watchObservedRunningTime="2026-01-21 14:56:19.339892976 +0000 UTC m=+1341.416726025" Jan 21 14:56:20 crc kubenswrapper[4902]: I0121 14:56:20.318030 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2k78d" event={"ID":"8f3ab19a-d650-41ea-aadd-8ec73ed824f2","Type":"ContainerStarted","Data":"878874319de7ae0b30076fed21352753826b954ce4e5342f533a40aa94a4f9e8"} Jan 21 14:56:20 crc kubenswrapper[4902]: I0121 14:56:20.322599 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerStarted","Data":"d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0"} Jan 21 14:56:20 crc kubenswrapper[4902]: I0121 14:56:20.340152 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-2k78d" podStartSLOduration=2.340134377 podStartE2EDuration="2.340134377s" podCreationTimestamp="2026-01-21 14:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-21 14:56:20.335609576 +0000 UTC m=+1342.412442605" watchObservedRunningTime="2026-01-21 14:56:20.340134377 +0000 UTC m=+1342.416967406" Jan 21 14:56:20 crc kubenswrapper[4902]: I0121 14:56:20.997282 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.023007 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.191687083 podStartE2EDuration="6.022981792s" podCreationTimestamp="2026-01-21 14:56:15 +0000 UTC" firstStartedPulling="2026-01-21 14:56:16.143593835 +0000 UTC m=+1338.220426864" lastFinishedPulling="2026-01-21 14:56:19.974888524 +0000 UTC m=+1342.051721573" observedRunningTime="2026-01-21 14:56:20.364490297 +0000 UTC m=+1342.441323336" watchObservedRunningTime="2026-01-21 14:56:21.022981792 +0000 UTC m=+1343.099814821" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.052032 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-2m4b6"] Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.052470 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" podUID="38f0c216-daa1-42c6-9105-11ad7d5fc686" containerName="dnsmasq-dns" containerID="cri-o://33ac3053e080a371fb9c1294b84f90b6187ab8ed37ebcb04994475127b9d12dc" gracePeriod=10 Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.332558 4902 generic.go:334] "Generic (PLEG): container finished" podID="38f0c216-daa1-42c6-9105-11ad7d5fc686" containerID="33ac3053e080a371fb9c1294b84f90b6187ab8ed37ebcb04994475127b9d12dc" exitCode=0 Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.332683 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" event={"ID":"38f0c216-daa1-42c6-9105-11ad7d5fc686","Type":"ContainerDied","Data":"33ac3053e080a371fb9c1294b84f90b6187ab8ed37ebcb04994475127b9d12dc"} Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.333979 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.566599 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.719353 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-svc\") pod \"38f0c216-daa1-42c6-9105-11ad7d5fc686\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.719726 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqp69\" (UniqueName: \"kubernetes.io/projected/38f0c216-daa1-42c6-9105-11ad7d5fc686-kube-api-access-zqp69\") pod \"38f0c216-daa1-42c6-9105-11ad7d5fc686\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.719938 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-nb\") pod \"38f0c216-daa1-42c6-9105-11ad7d5fc686\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.719974 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-swift-storage-0\") pod \"38f0c216-daa1-42c6-9105-11ad7d5fc686\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.719993 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-sb\") pod \"38f0c216-daa1-42c6-9105-11ad7d5fc686\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.720018 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-config\") pod \"38f0c216-daa1-42c6-9105-11ad7d5fc686\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.729253 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f0c216-daa1-42c6-9105-11ad7d5fc686-kube-api-access-zqp69" (OuterVolumeSpecName: "kube-api-access-zqp69") pod "38f0c216-daa1-42c6-9105-11ad7d5fc686" (UID: "38f0c216-daa1-42c6-9105-11ad7d5fc686"). InnerVolumeSpecName "kube-api-access-zqp69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.781939 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38f0c216-daa1-42c6-9105-11ad7d5fc686" (UID: "38f0c216-daa1-42c6-9105-11ad7d5fc686"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.784586 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-config" (OuterVolumeSpecName: "config") pod "38f0c216-daa1-42c6-9105-11ad7d5fc686" (UID: "38f0c216-daa1-42c6-9105-11ad7d5fc686"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.786374 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "38f0c216-daa1-42c6-9105-11ad7d5fc686" (UID: "38f0c216-daa1-42c6-9105-11ad7d5fc686"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.796807 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "38f0c216-daa1-42c6-9105-11ad7d5fc686" (UID: "38f0c216-daa1-42c6-9105-11ad7d5fc686"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.814546 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "38f0c216-daa1-42c6-9105-11ad7d5fc686" (UID: "38f0c216-daa1-42c6-9105-11ad7d5fc686"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.821764 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.821803 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.821819 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.821833 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.821845 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.821857 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqp69\" (UniqueName: \"kubernetes.io/projected/38f0c216-daa1-42c6-9105-11ad7d5fc686-kube-api-access-zqp69\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:22 crc kubenswrapper[4902]: I0121 14:56:22.346980 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:56:22 crc kubenswrapper[4902]: I0121 14:56:22.347458 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" event={"ID":"38f0c216-daa1-42c6-9105-11ad7d5fc686","Type":"ContainerDied","Data":"39dee8363ceb2e4e3e0e527468730e2f88866ca83679486319418b5875a94a82"} Jan 21 14:56:22 crc kubenswrapper[4902]: I0121 14:56:22.347486 4902 scope.go:117] "RemoveContainer" containerID="33ac3053e080a371fb9c1294b84f90b6187ab8ed37ebcb04994475127b9d12dc" Jan 21 14:56:22 crc kubenswrapper[4902]: I0121 14:56:22.380538 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-2m4b6"] Jan 21 14:56:22 crc kubenswrapper[4902]: I0121 14:56:22.381632 4902 scope.go:117] "RemoveContainer" containerID="e33abee5a4d9568ebeebc43b93ca969e5d4b5cadc5a0cff7461433d918dfb71d" Jan 21 14:56:22 crc kubenswrapper[4902]: I0121 14:56:22.390136 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-2m4b6"] Jan 21 14:56:24 crc kubenswrapper[4902]: I0121 14:56:24.305576 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38f0c216-daa1-42c6-9105-11ad7d5fc686" path="/var/lib/kubelet/pods/38f0c216-daa1-42c6-9105-11ad7d5fc686/volumes" Jan 21 14:56:24 crc kubenswrapper[4902]: I0121 14:56:24.368014 4902 generic.go:334] "Generic (PLEG): container finished" podID="8f3ab19a-d650-41ea-aadd-8ec73ed824f2" containerID="878874319de7ae0b30076fed21352753826b954ce4e5342f533a40aa94a4f9e8" exitCode=0 Jan 21 14:56:24 crc kubenswrapper[4902]: I0121 14:56:24.368070 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2k78d" event={"ID":"8f3ab19a-d650-41ea-aadd-8ec73ed824f2","Type":"ContainerDied","Data":"878874319de7ae0b30076fed21352753826b954ce4e5342f533a40aa94a4f9e8"} Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.751522 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.797148 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-config-data\") pod \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.798802 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-combined-ca-bundle\") pod \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.869947 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f3ab19a-d650-41ea-aadd-8ec73ed824f2" (UID: "8f3ab19a-d650-41ea-aadd-8ec73ed824f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.869975 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-config-data" (OuterVolumeSpecName: "config-data") pod "8f3ab19a-d650-41ea-aadd-8ec73ed824f2" (UID: "8f3ab19a-d650-41ea-aadd-8ec73ed824f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.901497 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-scripts\") pod \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.901566 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9mgr\" (UniqueName: \"kubernetes.io/projected/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-kube-api-access-f9mgr\") pod \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.902400 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.902421 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.906062 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-kube-api-access-f9mgr" (OuterVolumeSpecName: "kube-api-access-f9mgr") pod "8f3ab19a-d650-41ea-aadd-8ec73ed824f2" (UID: "8f3ab19a-d650-41ea-aadd-8ec73ed824f2"). InnerVolumeSpecName "kube-api-access-f9mgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.906105 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-scripts" (OuterVolumeSpecName: "scripts") pod "8f3ab19a-d650-41ea-aadd-8ec73ed824f2" (UID: "8f3ab19a-d650-41ea-aadd-8ec73ed824f2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.004833 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.004865 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9mgr\" (UniqueName: \"kubernetes.io/projected/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-kube-api-access-f9mgr\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.300412 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" podUID="38f0c216-daa1-42c6-9105-11ad7d5fc686" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: i/o timeout" Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.387910 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2k78d" event={"ID":"8f3ab19a-d650-41ea-aadd-8ec73ed824f2","Type":"ContainerDied","Data":"a6af641e9642822dba90080f660756012cdf63aa5e5e1d6680572bc43b1b15f2"} Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.388278 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6af641e9642822dba90080f660756012cdf63aa5e5e1d6680572bc43b1b15f2" Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.388161 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.565840 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.566152 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2cfa319b-3748-4cf5-9254-2af8ad04ffdc" containerName="nova-scheduler-scheduler" containerID="cri-o://2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096" gracePeriod=30 Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.576350 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.576628 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerName="nova-api-log" containerID="cri-o://1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0" gracePeriod=30 Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.576671 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerName="nova-api-api" containerID="cri-o://946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e" gracePeriod=30 Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.641256 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.641531 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-log" containerID="cri-o://af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7" gracePeriod=30 Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.641635 4902 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-metadata" containerID="cri-o://f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675" gracePeriod=30 Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.151399 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.327868 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd799afd-06ad-483d-b59d-9b5c1e947a6a-logs\") pod \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.327941 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-internal-tls-certs\") pod \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.328026 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9q5\" (UniqueName: \"kubernetes.io/projected/cd799afd-06ad-483d-b59d-9b5c1e947a6a-kube-api-access-2w9q5\") pod \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.328078 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-public-tls-certs\") pod \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.328121 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-config-data\") pod \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.328149 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-combined-ca-bundle\") pod \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.328258 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd799afd-06ad-483d-b59d-9b5c1e947a6a-logs" (OuterVolumeSpecName: "logs") pod "cd799afd-06ad-483d-b59d-9b5c1e947a6a" (UID: "cd799afd-06ad-483d-b59d-9b5c1e947a6a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.328508 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd799afd-06ad-483d-b59d-9b5c1e947a6a-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.332756 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd799afd-06ad-483d-b59d-9b5c1e947a6a-kube-api-access-2w9q5" (OuterVolumeSpecName: "kube-api-access-2w9q5") pod "cd799afd-06ad-483d-b59d-9b5c1e947a6a" (UID: "cd799afd-06ad-483d-b59d-9b5c1e947a6a"). InnerVolumeSpecName "kube-api-access-2w9q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.360173 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd799afd-06ad-483d-b59d-9b5c1e947a6a" (UID: "cd799afd-06ad-483d-b59d-9b5c1e947a6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.360246 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-config-data" (OuterVolumeSpecName: "config-data") pod "cd799afd-06ad-483d-b59d-9b5c1e947a6a" (UID: "cd799afd-06ad-483d-b59d-9b5c1e947a6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.391704 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cd799afd-06ad-483d-b59d-9b5c1e947a6a" (UID: "cd799afd-06ad-483d-b59d-9b5c1e947a6a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.393936 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cd799afd-06ad-483d-b59d-9b5c1e947a6a" (UID: "cd799afd-06ad-483d-b59d-9b5c1e947a6a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.401018 4902 generic.go:334] "Generic (PLEG): container finished" podID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerID="af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7" exitCode=143 Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.401119 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a","Type":"ContainerDied","Data":"af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7"} Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.403017 4902 generic.go:334] "Generic (PLEG): container finished" podID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerID="946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e" exitCode=0 Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.403036 4902 generic.go:334] "Generic (PLEG): container finished" podID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerID="1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0" exitCode=143 Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.403066 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd799afd-06ad-483d-b59d-9b5c1e947a6a","Type":"ContainerDied","Data":"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e"} Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.403084 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd799afd-06ad-483d-b59d-9b5c1e947a6a","Type":"ContainerDied","Data":"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0"} Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.403095 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd799afd-06ad-483d-b59d-9b5c1e947a6a","Type":"ContainerDied","Data":"e337f3f5a33903a3ca45aa510f3c236212e8e9e8cef1f827f67fc4fbb4689ba2"} Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.403111 4902 scope.go:117] "RemoveContainer" containerID="946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.403227 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.431481 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.431738 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.431834 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.431914 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9q5\" (UniqueName: \"kubernetes.io/projected/cd799afd-06ad-483d-b59d-9b5c1e947a6a-kube-api-access-2w9q5\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.431997 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.438703 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.444459 4902 scope.go:117] "RemoveContainer" containerID="1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.449435 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.463285 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:27 crc kubenswrapper[4902]: E0121 14:56:27.463969 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerName="nova-api-api" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.463992 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerName="nova-api-api" Jan 21 14:56:27 crc kubenswrapper[4902]: E0121 14:56:27.464033 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f0c216-daa1-42c6-9105-11ad7d5fc686" containerName="init" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.464075 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f0c216-daa1-42c6-9105-11ad7d5fc686" containerName="init" Jan 21 14:56:27 crc kubenswrapper[4902]: E0121 14:56:27.464098 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3ab19a-d650-41ea-aadd-8ec73ed824f2" containerName="nova-manage" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.464108 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3ab19a-d650-41ea-aadd-8ec73ed824f2" containerName="nova-manage" Jan 21 14:56:27 crc kubenswrapper[4902]: E0121 14:56:27.464121 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f0c216-daa1-42c6-9105-11ad7d5fc686" containerName="dnsmasq-dns" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.464154 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f0c216-daa1-42c6-9105-11ad7d5fc686" 
containerName="dnsmasq-dns" Jan 21 14:56:27 crc kubenswrapper[4902]: E0121 14:56:27.464174 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerName="nova-api-log" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.464183 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerName="nova-api-log" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.465202 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerName="nova-api-log" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.465234 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3ab19a-d650-41ea-aadd-8ec73ed824f2" containerName="nova-manage" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.465254 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerName="nova-api-api" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.465265 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="38f0c216-daa1-42c6-9105-11ad7d5fc686" containerName="dnsmasq-dns" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.466692 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.468622 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.468708 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.468718 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.471678 4902 scope.go:117] "RemoveContainer" containerID="946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e" Jan 21 14:56:27 crc kubenswrapper[4902]: E0121 14:56:27.472146 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e\": container with ID starting with 946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e not found: ID does not exist" containerID="946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.472170 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e"} err="failed to get container status \"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e\": rpc error: code = NotFound desc = could not find container \"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e\": container with ID starting with 946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e not found: ID does not exist" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.472188 4902 scope.go:117] "RemoveContainer" containerID="1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0" Jan 21 14:56:27 crc kubenswrapper[4902]: E0121 14:56:27.472551 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0\": 
container with ID starting with 1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0 not found: ID does not exist" containerID="1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.472570 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0"} err="failed to get container status \"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0\": rpc error: code = NotFound desc = could not find container \"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0\": container with ID starting with 1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0 not found: ID does not exist" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.472584 4902 scope.go:117] "RemoveContainer" containerID="946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.472924 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e"} err="failed to get container status \"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e\": rpc error: code = NotFound desc = could not find container \"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e\": container with ID starting with 946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e not found: ID does not exist" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.472941 4902 scope.go:117] "RemoveContainer" containerID="1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.475731 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.477342 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0"} err="failed to get container status \"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0\": rpc error: code = NotFound desc = could not find container \"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0\": container with ID starting with 1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0 not found: ID does not exist" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.635997 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.636348 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk9n4\" (UniqueName: \"kubernetes.io/projected/0ea9ca5b-2e24-41de-8a99-a882ec11c222-kube-api-access-xk9n4\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.636537 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.636660 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-config-data\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.636818 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea9ca5b-2e24-41de-8a99-a882ec11c222-logs\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.636878 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-public-tls-certs\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.738932 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.739015 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-config-data\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.739061 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea9ca5b-2e24-41de-8a99-a882ec11c222-logs\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.739090 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-public-tls-certs\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.739148 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.739184 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk9n4\" (UniqueName: \"kubernetes.io/projected/0ea9ca5b-2e24-41de-8a99-a882ec11c222-kube-api-access-xk9n4\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.739774 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea9ca5b-2e24-41de-8a99-a882ec11c222-logs\") pod \"nova-api-0\" (UID: 
\"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.744961 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.745575 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.746117 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-config-data\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.746125 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-public-tls-certs\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.760795 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk9n4\" (UniqueName: \"kubernetes.io/projected/0ea9ca5b-2e24-41de-8a99-a882ec11c222-kube-api-access-xk9n4\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.789702 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.264873 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.318810 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" path="/var/lib/kubelet/pods/cd799afd-06ad-483d-b59d-9b5c1e947a6a/volumes" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.334193 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.445740 4902 generic.go:334] "Generic (PLEG): container finished" podID="2cfa319b-3748-4cf5-9254-2af8ad04ffdc" containerID="2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096" exitCode=0 Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.445836 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2cfa319b-3748-4cf5-9254-2af8ad04ffdc","Type":"ContainerDied","Data":"2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096"} Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.445869 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2cfa319b-3748-4cf5-9254-2af8ad04ffdc","Type":"ContainerDied","Data":"4096ebf03a14e0e73e0ea175c0db3b27f6bd8591d31d085c11182fa32bf5e186"} Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.445888 4902 scope.go:117] "RemoveContainer" containerID="2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.446072 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.448895 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ea9ca5b-2e24-41de-8a99-a882ec11c222","Type":"ContainerStarted","Data":"6a0fa8e1aa73ccaec735410bd00188e5105d8445c279b0829562f3033236ffec"} Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.450454 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c97cm\" (UniqueName: \"kubernetes.io/projected/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-kube-api-access-c97cm\") pod \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.450619 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-config-data\") pod \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.450743 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-combined-ca-bundle\") pod \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.460357 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-kube-api-access-c97cm" (OuterVolumeSpecName: "kube-api-access-c97cm") pod "2cfa319b-3748-4cf5-9254-2af8ad04ffdc" (UID: "2cfa319b-3748-4cf5-9254-2af8ad04ffdc"). InnerVolumeSpecName "kube-api-access-c97cm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.474301 4902 scope.go:117] "RemoveContainer" containerID="2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096" Jan 21 14:56:28 crc kubenswrapper[4902]: E0121 14:56:28.474826 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096\": container with ID starting with 2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096 not found: ID does not exist" containerID="2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.474884 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096"} err="failed to get container status \"2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096\": rpc error: code = NotFound desc = could not find container \"2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096\": container with ID starting with 2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096 not found: ID does not exist" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.480678 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-config-data" (OuterVolumeSpecName: "config-data") pod "2cfa319b-3748-4cf5-9254-2af8ad04ffdc" (UID: "2cfa319b-3748-4cf5-9254-2af8ad04ffdc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.481530 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cfa319b-3748-4cf5-9254-2af8ad04ffdc" (UID: "2cfa319b-3748-4cf5-9254-2af8ad04ffdc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.554546 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.554587 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c97cm\" (UniqueName: \"kubernetes.io/projected/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-kube-api-access-c97cm\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.554603 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.786985 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.797799 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.812773 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:56:28 crc kubenswrapper[4902]: E0121 14:56:28.813198 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfa319b-3748-4cf5-9254-2af8ad04ffdc" containerName="nova-scheduler-scheduler" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.813219 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfa319b-3748-4cf5-9254-2af8ad04ffdc" containerName="nova-scheduler-scheduler" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.813387 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cfa319b-3748-4cf5-9254-2af8ad04ffdc" containerName="nova-scheduler-scheduler" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.814007 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.816624 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.828012 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.961800 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvj2m\" (UniqueName: \"kubernetes.io/projected/c366100e-d2a0-4be9-965f-ef7b7ad39f78-kube-api-access-lvj2m\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.961876 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-config-data\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.962164 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.063280 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.063696 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvj2m\" (UniqueName: \"kubernetes.io/projected/c366100e-d2a0-4be9-965f-ef7b7ad39f78-kube-api-access-lvj2m\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.063823 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-config-data\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.069764 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-config-data\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.069783 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.086929 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvj2m\" (UniqueName: 
\"kubernetes.io/projected/c366100e-d2a0-4be9-965f-ef7b7ad39f78-kube-api-access-lvj2m\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.128883 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.460702 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ea9ca5b-2e24-41de-8a99-a882ec11c222","Type":"ContainerStarted","Data":"fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748"} Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.461116 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ea9ca5b-2e24-41de-8a99-a882ec11c222","Type":"ContainerStarted","Data":"155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f"} Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.493799 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.493769527 podStartE2EDuration="2.493769527s" podCreationTimestamp="2026-01-21 14:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:56:29.477118903 +0000 UTC m=+1351.553951962" watchObservedRunningTime="2026-01-21 14:56:29.493769527 +0000 UTC m=+1351.570602586" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.605494 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.780556 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": read tcp 10.217.0.2:47370->10.217.0.191:8775: read: connection reset by peer" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.780879 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": read tcp 10.217.0.2:47386->10.217.0.191:8775: read: connection reset by peer" Jan 21 14:56:29 crc kubenswrapper[4902]: E0121 14:56:29.972566 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8f8e17e_d3bd_45ef_bc21_53e079ab3b4a.slice/crio-conmon-f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675.scope\": RecentStats: unable to find data in memory cache]" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.231566 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.312009 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cfa319b-3748-4cf5-9254-2af8ad04ffdc" path="/var/lib/kubelet/pods/2cfa319b-3748-4cf5-9254-2af8ad04ffdc/volumes" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.386959 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-logs\") pod \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.387278 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-combined-ca-bundle\") pod \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.387310 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g62v\" (UniqueName: \"kubernetes.io/projected/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-kube-api-access-4g62v\") pod \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.387349 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-nova-metadata-tls-certs\") pod \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.387377 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-config-data\") pod \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.388463 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-logs" (OuterVolumeSpecName: "logs") pod "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" (UID: "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.393032 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-kube-api-access-4g62v" (OuterVolumeSpecName: "kube-api-access-4g62v") pod "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" (UID: "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a"). InnerVolumeSpecName "kube-api-access-4g62v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.419298 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" (UID: "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.420139 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-config-data" (OuterVolumeSpecName: "config-data") pod "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" (UID: "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.444663 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" (UID: "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.472459 4902 generic.go:334] "Generic (PLEG): container finished" podID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerID="f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675" exitCode=0 Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.472525 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a","Type":"ContainerDied","Data":"f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675"} Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.473606 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a","Type":"ContainerDied","Data":"420d4e5dbc151ab2860e03ff284c833763ae6b775900cb8f0097accb8dfdab8c"} Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.473659 4902 scope.go:117] "RemoveContainer" containerID="f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.472535 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.478704 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c366100e-d2a0-4be9-965f-ef7b7ad39f78","Type":"ContainerStarted","Data":"421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812"} Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.478753 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c366100e-d2a0-4be9-965f-ef7b7ad39f78","Type":"ContainerStarted","Data":"f71b431a165886dfcb60b7772fbf29ab480085d500ccd4f828f82ea85ca3c58b"} Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.489363 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.489396 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.489406 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g62v\" (UniqueName: \"kubernetes.io/projected/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-kube-api-access-4g62v\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.489415 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.489424 4902 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.506219 4902 scope.go:117] "RemoveContainer" containerID="af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.515208 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.5151871630000002 podStartE2EDuration="2.515187163s" podCreationTimestamp="2026-01-21 14:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:56:30.501296903 +0000 UTC m=+1352.578129932" watchObservedRunningTime="2026-01-21 14:56:30.515187163 +0000 UTC m=+1352.592020192" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.557022 4902 scope.go:117] "RemoveContainer" containerID="f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.557819 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:56:30 crc kubenswrapper[4902]: E0121 14:56:30.559166 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675\": container with ID starting with f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675 not found: ID does not exist" containerID="f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675" Jan 21 
14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.559260 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675"} err="failed to get container status \"f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675\": rpc error: code = NotFound desc = could not find container \"f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675\": container with ID starting with f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675 not found: ID does not exist" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.559299 4902 scope.go:117] "RemoveContainer" containerID="af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7" Jan 21 14:56:30 crc kubenswrapper[4902]: E0121 14:56:30.559627 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7\": container with ID starting with af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7 not found: ID does not exist" containerID="af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.559658 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7"} err="failed to get container status \"af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7\": rpc error: code = NotFound desc = could not find container \"af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7\": container with ID starting with af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7 not found: ID does not exist" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.581484 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.592167 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:56:30 crc kubenswrapper[4902]: E0121 14:56:30.592630 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-metadata" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.592654 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-metadata" Jan 21 14:56:30 crc kubenswrapper[4902]: E0121 14:56:30.592731 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-log" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.592739 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-log" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.592945 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-metadata" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.592967 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-log" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.594074 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.596619 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.597664 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.601013 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.694158 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.694697 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.695260 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cgzm\" (UniqueName: \"kubernetes.io/projected/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-kube-api-access-9cgzm\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.695502 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-logs\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.695578 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-config-data\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.797488 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cgzm\" (UniqueName: \"kubernetes.io/projected/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-kube-api-access-9cgzm\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.797566 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-logs\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.797602 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-config-data\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 
14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.797718 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.797750 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.798361 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-logs\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.803201 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.803528 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.803678 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-config-data\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.816400 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cgzm\" (UniqueName: \"kubernetes.io/projected/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-kube-api-access-9cgzm\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.914802 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:56:31 crc kubenswrapper[4902]: I0121 14:56:31.393081 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:56:31 crc kubenswrapper[4902]: I0121 14:56:31.492891 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa6f350-dd82-4d59-ac24-5460acc2a8a6","Type":"ContainerStarted","Data":"34e826e9786b7ad724ed0dc96336ea0075c6129a9fc9742797a8ae0fd3c41773"} Jan 21 14:56:32 crc kubenswrapper[4902]: I0121 14:56:32.306030 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" path="/var/lib/kubelet/pods/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a/volumes" Jan 21 14:56:32 crc kubenswrapper[4902]: I0121 14:56:32.503499 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa6f350-dd82-4d59-ac24-5460acc2a8a6","Type":"ContainerStarted","Data":"6bf1eb34ffb8ebb875ad0db959e31364a4f9a1f5a32e44cce848251c4a780377"} Jan 21 14:56:32 crc kubenswrapper[4902]: I0121 14:56:32.503544 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa6f350-dd82-4d59-ac24-5460acc2a8a6","Type":"ContainerStarted","Data":"090f15138593116ea5509f9b1db81b64387863cddd781c3e2ec064762515d25e"} Jan 21 14:56:34 crc kubenswrapper[4902]: I0121 14:56:34.129882 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 14:56:35 crc kubenswrapper[4902]: I0121 14:56:35.915301 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 14:56:35 crc kubenswrapper[4902]: I0121 14:56:35.915743 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 14:56:37 crc kubenswrapper[4902]: I0121 14:56:37.790317 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:56:37 crc kubenswrapper[4902]: I0121 14:56:37.790587 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:56:38 crc kubenswrapper[4902]: I0121 14:56:38.804223 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:56:38 crc kubenswrapper[4902]: I0121 14:56:38.804297 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:56:39 crc kubenswrapper[4902]: I0121 14:56:39.130135 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 14:56:39 crc kubenswrapper[4902]: I0121 14:56:39.166650 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 14:56:39 crc kubenswrapper[4902]: I0121 14:56:39.186184 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=9.186167958 podStartE2EDuration="9.186167958s" podCreationTimestamp="2026-01-21 14:56:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:56:32.527642515 +0000 UTC m=+1354.604475544" watchObservedRunningTime="2026-01-21 14:56:39.186167958 +0000 UTC m=+1361.263000987" Jan 21 14:56:39 crc kubenswrapper[4902]: I0121 14:56:39.605287 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 14:56:40 crc kubenswrapper[4902]: I0121 14:56:40.915842 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 14:56:40 crc kubenswrapper[4902]: I0121 14:56:40.916092 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 14:56:41 crc kubenswrapper[4902]: I0121 14:56:41.930219 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:56:41 crc kubenswrapper[4902]: I0121 14:56:41.930249 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:56:45 crc kubenswrapper[4902]: I0121 14:56:45.693057 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 14:56:47 crc kubenswrapper[4902]: I0121 14:56:47.796956 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 14:56:47 crc kubenswrapper[4902]: I0121 14:56:47.797774 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 14:56:47 crc kubenswrapper[4902]: I0121 14:56:47.802675 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 14:56:47 crc kubenswrapper[4902]: I0121 14:56:47.804645 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 14:56:48 crc kubenswrapper[4902]: I0121 14:56:48.657080 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 14:56:48 crc kubenswrapper[4902]: I0121 14:56:48.663596 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 14:56:50 crc kubenswrapper[4902]: I0121 14:56:50.921263 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 14:56:50 crc kubenswrapper[4902]: I0121 14:56:50.922808 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 14:56:50 crc kubenswrapper[4902]: I0121 14:56:50.927373 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 14:56:51 crc kubenswrapper[4902]: I0121 14:56:51.710209 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.604425 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zbrd5"] Jan 21 14:57:10 crc kubenswrapper[4902]: 
I0121 14:57:10.605860 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.607924 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.618483 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zbrd5"] Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.722441 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e00e8be-96f7-4457-821f-440694bd8692-operator-scripts\") pod \"root-account-create-update-zbrd5\" (UID: \"8e00e8be-96f7-4457-821f-440694bd8692\") " pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.722555 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7gjv\" (UniqueName: \"kubernetes.io/projected/8e00e8be-96f7-4457-821f-440694bd8692-kube-api-access-p7gjv\") pod \"root-account-create-update-zbrd5\" (UID: \"8e00e8be-96f7-4457-821f-440694bd8692\") " pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.824588 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e00e8be-96f7-4457-821f-440694bd8692-operator-scripts\") pod \"root-account-create-update-zbrd5\" (UID: \"8e00e8be-96f7-4457-821f-440694bd8692\") " pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.824699 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7gjv\" (UniqueName: \"kubernetes.io/projected/8e00e8be-96f7-4457-821f-440694bd8692-kube-api-access-p7gjv\") pod \"root-account-create-update-zbrd5\" (UID: \"8e00e8be-96f7-4457-821f-440694bd8692\") " pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.825645 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e00e8be-96f7-4457-821f-440694bd8692-operator-scripts\") pod \"root-account-create-update-zbrd5\" (UID: \"8e00e8be-96f7-4457-821f-440694bd8692\") " pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.878184 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-9bb1-account-create-update-dbdlg"] Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.890733 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7gjv\" (UniqueName: \"kubernetes.io/projected/8e00e8be-96f7-4457-821f-440694bd8692-kube-api-access-p7gjv\") pod \"root-account-create-update-zbrd5\" (UID: \"8e00e8be-96f7-4457-821f-440694bd8692\") " pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.911784 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.916675 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.940812 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.945088 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gvjmj"] Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.979788 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gvjmj"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.030319 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx42c\" (UniqueName: \"kubernetes.io/projected/5a189ccd-729c-4453-8adf-7ef08834d320-kube-api-access-bx42c\") pod \"barbican-9bb1-account-create-update-dbdlg\" (UID: \"5a189ccd-729c-4453-8adf-7ef08834d320\") " pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.030456 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a189ccd-729c-4453-8adf-7ef08834d320-operator-scripts\") pod \"barbican-9bb1-account-create-update-dbdlg\" (UID: \"5a189ccd-729c-4453-8adf-7ef08834d320\") " pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.065150 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.065377 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="b14dfbd1-cf80-4ba8-9372-ca5767f5d689" containerName="openstackclient" containerID="cri-o://c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a" gracePeriod=2 Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.097889 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9bb1-account-create-update-dbdlg"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.113287 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.133535 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx42c\" (UniqueName: \"kubernetes.io/projected/5a189ccd-729c-4453-8adf-7ef08834d320-kube-api-access-bx42c\") pod \"barbican-9bb1-account-create-update-dbdlg\" (UID: \"5a189ccd-729c-4453-8adf-7ef08834d320\") " pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.133676 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a189ccd-729c-4453-8adf-7ef08834d320-operator-scripts\") pod \"barbican-9bb1-account-create-update-dbdlg\" (UID: \"5a189ccd-729c-4453-8adf-7ef08834d320\") " pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.134940 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5a189ccd-729c-4453-8adf-7ef08834d320-operator-scripts\") pod \"barbican-9bb1-account-create-update-dbdlg\" (UID: \"5a189ccd-729c-4453-8adf-7ef08834d320\") " pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.138158 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9bb1-account-create-update-f5hbr"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.176112 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-9bb1-account-create-update-f5hbr"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.204328 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.235375 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.235598 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="ovn-northd" containerID="cri-o://c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" gracePeriod=30 Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.235971 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="openstack-network-exporter" containerID="cri-o://e8a096d5f6a2e59562479be65d1cff285382747948d319ddcc17f47f718069db" gracePeriod=30 Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.237639 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx42c\" (UniqueName: \"kubernetes.io/projected/5a189ccd-729c-4453-8adf-7ef08834d320-kube-api-access-bx42c\") pod \"barbican-9bb1-account-create-update-dbdlg\" (UID: \"5a189ccd-729c-4453-8adf-7ef08834d320\") " pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.261101 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7226-account-create-update-dvfjh"] Jan 21 14:57:11 crc kubenswrapper[4902]: E0121 14:57:11.261523 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14dfbd1-cf80-4ba8-9372-ca5767f5d689" containerName="openstackclient" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.261543 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14dfbd1-cf80-4ba8-9372-ca5767f5d689" containerName="openstackclient" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.261720 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b14dfbd1-cf80-4ba8-9372-ca5767f5d689" containerName="openstackclient" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.262298 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.267576 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.274007 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7226-account-create-update-dvfjh"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.285109 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7226-account-create-update-krlk5"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.294355 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.295030 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7226-account-create-update-krlk5"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.336905 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a8bdead-378c-4db8-acfe-a0b449c69e8a-operator-scripts\") pod \"cinder-7226-account-create-update-dvfjh\" (UID: \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\") " pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.336996 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8zjl\" (UniqueName: \"kubernetes.io/projected/6a8bdead-378c-4db8-acfe-a0b449c69e8a-kube-api-access-n8zjl\") pod \"cinder-7226-account-create-update-dvfjh\" (UID: \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\") " pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.369175 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6f87-account-create-update-w85cg"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.405050 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-ktqgj"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.416382 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-ktqgj"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.436174 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6f87-account-create-update-w85cg"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.438377 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a8bdead-378c-4db8-acfe-a0b449c69e8a-operator-scripts\") pod \"cinder-7226-account-create-update-dvfjh\" (UID: \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\") " pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.438522 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8zjl\" (UniqueName: \"kubernetes.io/projected/6a8bdead-378c-4db8-acfe-a0b449c69e8a-kube-api-access-n8zjl\") pod \"cinder-7226-account-create-update-dvfjh\" (UID: \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\") " pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.440106 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6a8bdead-378c-4db8-acfe-a0b449c69e8a-operator-scripts\") pod \"cinder-7226-account-create-update-dvfjh\" (UID: \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\") " pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:11.440350 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:11.440396 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data podName:8d7103bd-b24b-4a0c-b68a-17373307f1aa nodeName:}" failed. No retries permitted until 2026-01-21 14:57:11.940380258 +0000 UTC m=+1394.017213287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data") pod "rabbitmq-cell1-server-0" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa") : configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.538245 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8zjl\" (UniqueName: \"kubernetes.io/projected/6a8bdead-378c-4db8-acfe-a0b449c69e8a-kube-api-access-n8zjl\") pod \"cinder-7226-account-create-update-dvfjh\" (UID: \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\") " pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.605781 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6f87-account-create-update-rx9dv"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.606952 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.608496 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.615217 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-b64dh"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.662120 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-b64dh"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.716645 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c005e52-f6e5-413f-ba23-cb99e461cb66-operator-scripts\") pod \"nova-api-6f87-account-create-update-rx9dv\" (UID: \"8c005e52-f6e5-413f-ba23-cb99e461cb66\") " pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.716745 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvjnr\" (UniqueName: \"kubernetes.io/projected/8c005e52-f6e5-413f-ba23-cb99e461cb66-kube-api-access-rvjnr\") pod \"nova-api-6f87-account-create-update-rx9dv\" (UID: \"8c005e52-f6e5-413f-ba23-cb99e461cb66\") " pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.748224 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6f87-account-create-update-rx9dv"] Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:11.784727 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.803790 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kxwsm"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.820183 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c005e52-f6e5-413f-ba23-cb99e461cb66-operator-scripts\") pod \"nova-api-6f87-account-create-update-rx9dv\" (UID: \"8c005e52-f6e5-413f-ba23-cb99e461cb66\") " pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.820223 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvjnr\" (UniqueName: \"kubernetes.io/projected/8c005e52-f6e5-413f-ba23-cb99e461cb66-kube-api-access-rvjnr\") pod \"nova-api-6f87-account-create-update-rx9dv\" (UID: \"8c005e52-f6e5-413f-ba23-cb99e461cb66\") " pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.821239 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c005e52-f6e5-413f-ba23-cb99e461cb66-operator-scripts\") pod \"nova-api-6f87-account-create-update-rx9dv\" (UID: \"8c005e52-f6e5-413f-ba23-cb99e461cb66\") " pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:11.839563 4902 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.852986 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-4sm9h"] Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:11.921080 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:11.921139 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="ovn-northd" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.963089 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvjnr\" (UniqueName: \"kubernetes.io/projected/8c005e52-f6e5-413f-ba23-cb99e461cb66-kube-api-access-rvjnr\") pod \"nova-api-6f87-account-create-update-rx9dv\" (UID: \"8c005e52-f6e5-413f-ba23-cb99e461cb66\") " pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:11.980598 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ff2c3d8_2d68_4255_a175_21f0df1b9276.slice/crio-e8a096d5f6a2e59562479be65d1cff285382747948d319ddcc17f47f718069db.scope\": RecentStats: unable to find data in memory cache]" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.917410 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-c27gh"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.019785 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-lmnmw"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.019979 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-c27gh" podUID="8891f80f-6cb0-4dc6-9f92-836d465e1c84" containerName="openstack-network-exporter" containerID="cri-o://1e365c417d7c9fc9f0e3c50b8df2956ab629924185f3c066a501456bc7f2f244" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:12.026518 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:12.026573 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data podName:8d7103bd-b24b-4a0c-b68a-17373307f1aa nodeName:}" failed. No retries permitted until 2026-01-21 14:57:13.026556951 +0000 UTC m=+1395.103389980 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data") pod "rabbitmq-cell1-server-0" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa") : configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.032806 4902 generic.go:334] "Generic (PLEG): container finished" podID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerID="e8a096d5f6a2e59562479be65d1cff285382747948d319ddcc17f47f718069db" exitCode=2 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.032843 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2ff2c3d8-2d68-4255-a175-21f0df1b9276","Type":"ContainerDied","Data":"e8a096d5f6a2e59562479be65d1cff285382747948d319ddcc17f47f718069db"} Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.054552 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-lmnmw"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.116527 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-vrg52"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.132488 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.139199 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.150885 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-vrg52"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.156835 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.181813 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-457b-account-create-update-2trwh"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.193237 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-457b-account-create-update-2trwh"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.203538 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-twg7k"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.214312 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7ca2-account-create-update-tz26x"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.226024 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7ca2-account-create-update-tz26x"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.234797 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-twg7k"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.263743 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-gzrwg"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.263994 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" podUID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" containerName="dnsmasq-dns" containerID="cri-o://193c2ec1f234088f5b0bf3f8d841b9715ab506a6f64990bd75f4173da10330ef" gracePeriod=10 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.305778 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-zlh54"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.334768 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a61b8e1-9a04-429b-9439-bee181301046-operator-scripts\") pod \"nova-cell0-7df7-account-create-update-vrg52\" (UID: \"1a61b8e1-9a04-429b-9439-bee181301046\") " pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.334958 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cln9f\" (UniqueName: \"kubernetes.io/projected/1a61b8e1-9a04-429b-9439-bee181301046-kube-api-access-cln9f\") pod \"nova-cell0-7df7-account-create-update-vrg52\" (UID: \"1a61b8e1-9a04-429b-9439-bee181301046\") " pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.360009 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="137b1040-d368-4b6d-a4db-ba7c626f666f" path="/var/lib/kubelet/pods/137b1040-d368-4b6d-a4db-ba7c626f666f/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.361134 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="249b1461-ed19-4572-b1e6-c5c44cfa9145" path="/var/lib/kubelet/pods/249b1461-ed19-4572-b1e6-c5c44cfa9145/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.361896 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b401edf-e2ca-4abb-adb7-008ce32403b1" path="/var/lib/kubelet/pods/3b401edf-e2ca-4abb-adb7-008ce32403b1/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:12.372204 4902 handlers.go:78] "Exec lifecycle hook for Container in Pod 
failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-kxwsm" message=< Jan 21 14:57:12 crc kubenswrapper[4902]: Exiting ovn-controller (1) [ OK ] Jan 21 14:57:12 crc kubenswrapper[4902]: > Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:12.372237 4902 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-kxwsm" podUID="e8135258-f03d-4c9a-be6f-7dd1dd099188" containerName="ovn-controller" containerID="cri-o://339126d2349790760c7b3087cf9fa15cd976581645c959f56ddb41d46b290f7c" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.372269 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-kxwsm" podUID="e8135258-f03d-4c9a-be6f-7dd1dd099188" containerName="ovn-controller" containerID="cri-o://339126d2349790760c7b3087cf9fa15cd976581645c959f56ddb41d46b290f7c" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.373678 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6baf26e6-f197-4ae1-b7a5-40a1147e3276" path="/var/lib/kubelet/pods/6baf26e6-f197-4ae1-b7a5-40a1147e3276/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.374928 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83490157-abed-443f-8843-945bb43715af" path="/var/lib/kubelet/pods/83490157-abed-443f-8843-945bb43715af/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.375630 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab17d58c-9dc5-4a20-8ca7-3d06256080c3" path="/var/lib/kubelet/pods/ab17d58c-9dc5-4a20-8ca7-3d06256080c3/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.376272 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0fa0e74-137e-4ff6-9610-37b9ebe612c9" path="/var/lib/kubelet/pods/d0fa0e74-137e-4ff6-9610-37b9ebe612c9/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.427303 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd0110fe-ef40-4a4b-bad7-a3c24aa5089a" path="/var/lib/kubelet/pods/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.428317 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7eab019-1ec9-4109-93f8-2f3caa1fa508" path="/var/lib/kubelet/pods/e7eab019-1ec9-4109-93f8-2f3caa1fa508/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.429090 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb5e91bc-7b75-4275-b1b6-998431981fca" path="/var/lib/kubelet/pods/eb5e91bc-7b75-4275-b1b6-998431981fca/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.429879 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.429920 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4ds4z"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.429935 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-zlh54"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.429953 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4ds4z"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.429966 4902 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-hmcs2"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.429978 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-hmcs2"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.430364 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerName="openstack-network-exporter" containerID="cri-o://f33529c27085ffa8a5953825706b4cb4672e9bfd551a411eede0445f1ce65803" gracePeriod=300 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.435524 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7ddf9d8f68-jjk7f"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.435932 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7ddf9d8f68-jjk7f" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" containerName="placement-log" containerID="cri-o://bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.436150 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7ddf9d8f68-jjk7f" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" containerName="placement-api" containerID="cri-o://51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.437391 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a61b8e1-9a04-429b-9439-bee181301046-operator-scripts\") pod \"nova-cell0-7df7-account-create-update-vrg52\" (UID: \"1a61b8e1-9a04-429b-9439-bee181301046\") " pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.437630 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cln9f\" (UniqueName: \"kubernetes.io/projected/1a61b8e1-9a04-429b-9439-bee181301046-kube-api-access-cln9f\") pod \"nova-cell0-7df7-account-create-update-vrg52\" (UID: \"1a61b8e1-9a04-429b-9439-bee181301046\") " pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.444581 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a61b8e1-9a04-429b-9439-bee181301046-operator-scripts\") pod \"nova-cell0-7df7-account-create-update-vrg52\" (UID: \"1a61b8e1-9a04-429b-9439-bee181301046\") " pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.463117 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.463859 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerName="openstack-network-exporter" containerID="cri-o://fabbe3c5e36565bf6c2514be460d8e197d15c7ef2a2eaad51eaaf9fc51cd6931" gracePeriod=300 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.464675 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.484800 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.486575 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-server" containerID="cri-o://69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487016 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="swift-recon-cron" containerID="cri-o://71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487099 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="rsync" containerID="cri-o://a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487149 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-expirer" containerID="cri-o://589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487199 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-updater" containerID="cri-o://6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487249 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-auditor" containerID="cri-o://0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487307 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-replicator" containerID="cri-o://a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487570 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-server" containerID="cri-o://fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487625 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-updater" containerID="cri-o://eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487674 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" 
podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-auditor" containerID="cri-o://756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487719 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-replicator" containerID="cri-o://c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487763 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-server" containerID="cri-o://df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487809 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-reaper" containerID="cri-o://b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487852 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-auditor" containerID="cri-o://723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487898 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-replicator" containerID="cri-o://ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.504941 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.527819 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.528159 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerName="glance-log" containerID="cri-o://11db3a976cf5ea9322be5da7913baf9b9709079192d4b3c588596ad2459819bd" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.528354 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerName="glance-httpd" containerID="cri-o://29a7ab7f1ceb1b7248d2507a5eb6085cbee233d8230ecf775819b6f6ce78389e" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.596479 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cln9f\" (UniqueName: \"kubernetes.io/projected/1a61b8e1-9a04-429b-9439-bee181301046-kube-api-access-cln9f\") pod \"nova-cell0-7df7-account-create-update-vrg52\" (UID: \"1a61b8e1-9a04-429b-9439-bee181301046\") " pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.644750 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-2k78d"] Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:12.654712 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:12.654821 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data podName:67f50f65-9151-4444-9680-f86e0f256069 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:13.154792276 +0000 UTC m=+1395.231625305 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data") pod "rabbitmq-server-0" (UID: "67f50f65-9151-4444-9680-f86e0f256069") : configmap "rabbitmq-config-data" not found Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.676509 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerName="ovsdbserver-sb" containerID="cri-o://00bf7a3928a19891dd7e4eeb9d6cbd183d170218b09cf88bac1204f77dcea9f1" gracePeriod=300 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.725177 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-2k78d"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.828156 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-hlnnm"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.853196 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.889596 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-hlnnm"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.917254 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerName="ovsdbserver-nb" containerID="cri-o://9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8" gracePeriod=300 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.962651 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.962955 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerName="cinder-scheduler" containerID="cri-o://669110a27652bb9b7b8004db550a35eb0dceaedaf48edf3ca2483cc2449bc57c" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.963466 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerName="probe" containerID="cri-o://c7dbc8dbff5390b63de46436cbdf0b7cd9f0cbbc930ab3a08d07d477a6d55001" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.994832 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-62fdp"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.013667 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-62fdp"] Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.019978 4902 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8 is running failed: container process not found" containerID="9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.020713 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8 is running failed: container process not found" containerID="9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.021209 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8 is running failed: container process not found" containerID="9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.021271 4902 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerName="ovsdbserver-nb" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.034171 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.034528 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c4168bc0-26cf-4786-9e28-95647462c372" containerName="glance-log" containerID="cri-o://baf5060a9be38be6557c2e269eeef0d7067b99a8ffc55de9fabcd6c3d7fd4375" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.035348 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c4168bc0-26cf-4786-9e28-95647462c372" containerName="glance-httpd" containerID="cri-o://635d235f3800b93dc934010299b8ed6cf8c1efd38064d7aecd2aa2faa2ae46a0" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.083310 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.083373 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data podName:8d7103bd-b24b-4a0c-b68a-17373307f1aa nodeName:}" failed. No retries permitted until 2026-01-21 14:57:15.083359673 +0000 UTC m=+1397.160192702 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data") pod "rabbitmq-cell1-server-0" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa") : configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.111886 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.112158 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" containerName="cinder-api-log" containerID="cri-o://d81b469d4bfe4317399c28b768091ee1e4d32b1ffeb38b5ab40fde67bdde4b7f" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.112553 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" containerName="cinder-api" containerID="cri-o://4c05b52bed8146e4b813b72bd57efca7be3d0268ea82de7f8102940d78d0f674" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.146826 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-x9wcg"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.149444 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerID="11db3a976cf5ea9322be5da7913baf9b9709079192d4b3c588596ad2459819bd" exitCode=143 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.149495 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f","Type":"ContainerDied","Data":"11db3a976cf5ea9322be5da7913baf9b9709079192d4b3c588596ad2459819bd"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.159233 4902 generic.go:334] "Generic (PLEG): container finished" podID="e8135258-f03d-4c9a-be6f-7dd1dd099188" containerID="339126d2349790760c7b3087cf9fa15cd976581645c959f56ddb41d46b290f7c" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.159305 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm" event={"ID":"e8135258-f03d-4c9a-be6f-7dd1dd099188","Type":"ContainerDied","Data":"339126d2349790760c7b3087cf9fa15cd976581645c959f56ddb41d46b290f7c"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.164795 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_55191d4e-0310-4e6a-a10c-902e0cc8a209/ovsdbserver-sb/0.log" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.164834 4902 generic.go:334] "Generic (PLEG): container finished" podID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerID="f33529c27085ffa8a5953825706b4cb4672e9bfd551a411eede0445f1ce65803" exitCode=2 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.164850 4902 generic.go:334] "Generic (PLEG): container finished" podID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerID="00bf7a3928a19891dd7e4eeb9d6cbd183d170218b09cf88bac1204f77dcea9f1" exitCode=143 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.164885 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"55191d4e-0310-4e6a-a10c-902e0cc8a209","Type":"ContainerDied","Data":"f33529c27085ffa8a5953825706b4cb4672e9bfd551a411eede0445f1ce65803"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.164906 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-sb-0" event={"ID":"55191d4e-0310-4e6a-a10c-902e0cc8a209","Type":"ContainerDied","Data":"00bf7a3928a19891dd7e4eeb9d6cbd183d170218b09cf88bac1204f77dcea9f1"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.166518 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-c27gh_8891f80f-6cb0-4dc6-9f92-836d465e1c84/openstack-network-exporter/0.log" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.166544 4902 generic.go:334] "Generic (PLEG): container finished" podID="8891f80f-6cb0-4dc6-9f92-836d465e1c84" containerID="1e365c417d7c9fc9f0e3c50b8df2956ab629924185f3c066a501456bc7f2f244" exitCode=2 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.166579 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-c27gh" event={"ID":"8891f80f-6cb0-4dc6-9f92-836d465e1c84","Type":"ContainerDied","Data":"1e365c417d7c9fc9f0e3c50b8df2956ab629924185f3c066a501456bc7f2f244"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.166593 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-c27gh" event={"ID":"8891f80f-6cb0-4dc6-9f92-836d465e1c84","Type":"ContainerDied","Data":"b07d2a04235629b220fbd6c246ba8a8b5088d31b321ecb0ba20c9950895f0f74"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.166626 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b07d2a04235629b220fbd6c246ba8a8b5088d31b321ecb0ba20c9950895f0f74" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.171170 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-x9wcg"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.175390 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a0c6-account-create-update-g2pwx"] Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.185803 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.185868 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data podName:67f50f65-9151-4444-9680-f86e0f256069 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:14.185853994 +0000 UTC m=+1396.262687023 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data") pod "rabbitmq-server-0" (UID: "67f50f65-9151-4444-9680-f86e0f256069") : configmap "rabbitmq-config-data" not found Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.203206 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a0c6-account-create-update-g2pwx"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.212897 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-431b-account-create-update-trwhd"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.238204 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-431b-account-create-update-trwhd"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258104 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258132 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258139 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258145 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258152 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258158 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258164 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258170 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258176 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258183 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258190 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" 
containerID="723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258196 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258236 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258263 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258273 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258281 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258289 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258297 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258305 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258312 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258320 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258328 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258337 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258345 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.259996 4902 generic.go:334] "Generic (PLEG): container finished" podID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" containerID="193c2ec1f234088f5b0bf3f8d841b9715ab506a6f64990bd75f4173da10330ef" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.260059 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" event={"ID":"5ef26f87-2d73-4847-abfb-a3bbda8c01c6","Type":"ContainerDied","Data":"193c2ec1f234088f5b0bf3f8d841b9715ab506a6f64990bd75f4173da10330ef"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.261411 4902 generic.go:334] "Generic (PLEG): container finished" podID="b71fc896-318c-4277-bb32-70e3424a26c9" containerID="bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924" exitCode=143 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.261458 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ddf9d8f68-jjk7f" event={"ID":"b71fc896-318c-4277-bb32-70e3424a26c9","Type":"ContainerDied","Data":"bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.263125 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3/ovsdbserver-nb/0.log" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.263161 4902 generic.go:334] "Generic (PLEG): container finished" podID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerID="fabbe3c5e36565bf6c2514be460d8e197d15c7ef2a2eaad51eaaf9fc51cd6931" exitCode=2 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.263181 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3","Type":"ContainerDied","Data":"fabbe3c5e36565bf6c2514be460d8e197d15c7ef2a2eaad51eaaf9fc51cd6931"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.310062 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-c27gh_8891f80f-6cb0-4dc6-9f92-836d465e1c84/openstack-network-exporter/0.log" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.310331 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.322760 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kxwsm" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.344385 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" containerID="cri-o://0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" gracePeriod=29 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.347763 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7887695489-rtxbl"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.348003 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7887695489-rtxbl" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-api" containerID="cri-o://51583e6b97e071d7cf96bdf513ff863344bb3712ef59fd993cdce4376b16aa3c" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.348190 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7887695489-rtxbl" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-httpd" containerID="cri-o://2c30f8fcf44519868021b999009e6e0a364f65ba9bb5e12d8b816868d45e7ed6" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397340 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovn-rundir\") pod \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397382 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5wqz\" (UniqueName: \"kubernetes.io/projected/8891f80f-6cb0-4dc6-9f92-836d465e1c84-kube-api-access-x5wqz\") pod \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397437 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8891f80f-6cb0-4dc6-9f92-836d465e1c84-config\") pod \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397483 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-log-ovn\") pod \"e8135258-f03d-4c9a-be6f-7dd1dd099188\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397505 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run-ovn\") pod \"e8135258-f03d-4c9a-be6f-7dd1dd099188\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397560 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-ovn-controller-tls-certs\") pod \"e8135258-f03d-4c9a-be6f-7dd1dd099188\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397590 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-combined-ca-bundle\") pod \"e8135258-f03d-4c9a-be6f-7dd1dd099188\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397613 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovs-rundir\") pod \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397643 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-metrics-certs-tls-certs\") pod \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397665 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smxgb\" (UniqueName: \"kubernetes.io/projected/e8135258-f03d-4c9a-be6f-7dd1dd099188-kube-api-access-smxgb\") pod \"e8135258-f03d-4c9a-be6f-7dd1dd099188\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397688 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run\") pod \"e8135258-f03d-4c9a-be6f-7dd1dd099188\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397725 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-combined-ca-bundle\") pod \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397748 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8135258-f03d-4c9a-be6f-7dd1dd099188-scripts\") pod \"e8135258-f03d-4c9a-be6f-7dd1dd099188\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.398943 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "8891f80f-6cb0-4dc6-9f92-836d465e1c84" (UID: "8891f80f-6cb0-4dc6-9f92-836d465e1c84"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.403179 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "8891f80f-6cb0-4dc6-9f92-836d465e1c84" (UID: "8891f80f-6cb0-4dc6-9f92-836d465e1c84"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.403255 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run" (OuterVolumeSpecName: "var-run") pod "e8135258-f03d-4c9a-be6f-7dd1dd099188" (UID: "e8135258-f03d-4c9a-be6f-7dd1dd099188"). 
InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.408446 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8891f80f-6cb0-4dc6-9f92-836d465e1c84-config" (OuterVolumeSpecName: "config") pod "8891f80f-6cb0-4dc6-9f92-836d465e1c84" (UID: "8891f80f-6cb0-4dc6-9f92-836d465e1c84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.408515 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e8135258-f03d-4c9a-be6f-7dd1dd099188" (UID: "e8135258-f03d-4c9a-be6f-7dd1dd099188"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.408536 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e8135258-f03d-4c9a-be6f-7dd1dd099188" (UID: "e8135258-f03d-4c9a-be6f-7dd1dd099188"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.410115 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-nxvvs"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.413633 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8135258-f03d-4c9a-be6f-7dd1dd099188-scripts" (OuterVolumeSpecName: "scripts") pod "e8135258-f03d-4c9a-be6f-7dd1dd099188" (UID: "e8135258-f03d-4c9a-be6f-7dd1dd099188"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.441423 4902 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 21 14:57:13 crc kubenswrapper[4902]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 21 14:57:13 crc kubenswrapper[4902]: + source /usr/local/bin/container-scripts/functions Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNBridge=br-int Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNRemote=tcp:localhost:6642 Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNEncapType=geneve Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNAvailabilityZones= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ EnableChassisAsGateway=true Jan 21 14:57:13 crc kubenswrapper[4902]: ++ PhysicalNetworks= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNHostName= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 21 14:57:13 crc kubenswrapper[4902]: ++ ovs_dir=/var/lib/openvswitch Jan 21 14:57:13 crc kubenswrapper[4902]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 21 14:57:13 crc kubenswrapper[4902]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 21 14:57:13 crc kubenswrapper[4902]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + sleep 0.5 Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + sleep 0.5 Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + cleanup_ovsdb_server_semaphore Jan 21 14:57:13 crc kubenswrapper[4902]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:57:13 crc kubenswrapper[4902]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 21 14:57:13 crc kubenswrapper[4902]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-4sm9h" message=< Jan 21 14:57:13 crc kubenswrapper[4902]: Exiting ovsdb-server (5) [ OK ] Jan 21 14:57:13 crc kubenswrapper[4902]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 21 14:57:13 crc kubenswrapper[4902]: + source /usr/local/bin/container-scripts/functions Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNBridge=br-int Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNRemote=tcp:localhost:6642 Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNEncapType=geneve Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNAvailabilityZones= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ EnableChassisAsGateway=true Jan 21 14:57:13 crc kubenswrapper[4902]: ++ PhysicalNetworks= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNHostName= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 21 14:57:13 crc kubenswrapper[4902]: ++ ovs_dir=/var/lib/openvswitch Jan 21 14:57:13 crc kubenswrapper[4902]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 21 14:57:13 crc kubenswrapper[4902]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 21 14:57:13 crc kubenswrapper[4902]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + sleep 0.5 Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + sleep 0.5 Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + cleanup_ovsdb_server_semaphore Jan 21 14:57:13 crc kubenswrapper[4902]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:57:13 crc kubenswrapper[4902]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 21 14:57:13 crc kubenswrapper[4902]: > Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.441471 4902 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 21 14:57:13 crc kubenswrapper[4902]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 21 14:57:13 crc kubenswrapper[4902]: + source /usr/local/bin/container-scripts/functions Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNBridge=br-int Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNRemote=tcp:localhost:6642 Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNEncapType=geneve Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNAvailabilityZones= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ EnableChassisAsGateway=true Jan 21 14:57:13 crc kubenswrapper[4902]: ++ PhysicalNetworks= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNHostName= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 21 14:57:13 crc kubenswrapper[4902]: ++ ovs_dir=/var/lib/openvswitch Jan 21 14:57:13 crc kubenswrapper[4902]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 21 14:57:13 crc kubenswrapper[4902]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 21 14:57:13 crc kubenswrapper[4902]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + sleep 0.5 Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + sleep 0.5 Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + cleanup_ovsdb_server_semaphore Jan 21 14:57:13 crc kubenswrapper[4902]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:57:13 crc kubenswrapper[4902]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 21 14:57:13 crc kubenswrapper[4902]: > pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" containerID="cri-o://df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.441512 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" containerID="cri-o://df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" gracePeriod=29 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.442813 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8135258-f03d-4c9a-be6f-7dd1dd099188-kube-api-access-smxgb" (OuterVolumeSpecName: "kube-api-access-smxgb") pod "e8135258-f03d-4c9a-be6f-7dd1dd099188" (UID: "e8135258-f03d-4c9a-be6f-7dd1dd099188"). InnerVolumeSpecName "kube-api-access-smxgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.443569 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8891f80f-6cb0-4dc6-9f92-836d465e1c84-kube-api-access-x5wqz" (OuterVolumeSpecName: "kube-api-access-x5wqz") pod "8891f80f-6cb0-4dc6-9f92-836d465e1c84" (UID: "8891f80f-6cb0-4dc6-9f92-836d465e1c84"). InnerVolumeSpecName "kube-api-access-x5wqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.480160 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-nxvvs"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500062 4902 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500088 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5wqz\" (UniqueName: \"kubernetes.io/projected/8891f80f-6cb0-4dc6-9f92-836d465e1c84-kube-api-access-x5wqz\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500097 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8891f80f-6cb0-4dc6-9f92-836d465e1c84-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500106 4902 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500114 4902 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500121 4902 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500129 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smxgb\" (UniqueName: \"kubernetes.io/projected/e8135258-f03d-4c9a-be6f-7dd1dd099188-kube-api-access-smxgb\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500137 4902 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500146 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8135258-f03d-4c9a-be6f-7dd1dd099188-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.503450 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.504255 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8891f80f-6cb0-4dc6-9f92-836d465e1c84" (UID: 
"8891f80f-6cb0-4dc6-9f92-836d465e1c84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.523017 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4czjl"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.545348 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8135258-f03d-4c9a-be6f-7dd1dd099188" (UID: "e8135258-f03d-4c9a-be6f-7dd1dd099188"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.547210 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.563193 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4czjl"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.603088 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.603113 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.608891 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "8891f80f-6cb0-4dc6-9f92-836d465e1c84" (UID: "8891f80f-6cb0-4dc6-9f92-836d465e1c84"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.617902 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "e8135258-f03d-4c9a-be6f-7dd1dd099188" (UID: "e8135258-f03d-4c9a-be6f-7dd1dd099188"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.643191 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.643463 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-log" containerID="cri-o://090f15138593116ea5509f9b1db81b64387863cddd781c3e2ec064762515d25e" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.643883 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-metadata" containerID="cri-o://6bf1eb34ffb8ebb875ad0db959e31364a4f9a1f5a32e44cce848251c4a780377" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.664677 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-lwq2z"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.671910 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-lwq2z"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.678102 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-np7hz"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.684724 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-np7hz"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.694778 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-b755cd77b-nd6p7"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.695001 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerName="barbican-keystone-listener-log" containerID="cri-o://f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.695370 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerName="barbican-keystone-listener" containerID="cri-o://c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.704217 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="94031dcf-9569-4cf1-90a9-61c962434ae8" containerName="galera" containerID="cri-o://6843f7fdaa415e7e2f0347cd97fdaa8f7eaf2a1c6b75202daa5f85889752389a" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.704357 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5df595696d-2ftxp"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.704572 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5df595696d-2ftxp" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api-log" containerID="cri-o://b91bda9e24415f053bbf7e3136ae0eb36d0535911dff5c3a69ee2c9fd40feb34" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.704902 4902 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-api-5df595696d-2ftxp" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api" containerID="cri-o://709dea640199a3e29bbff0c5bd046ca78f3c55c233e1043ae28cc59e518b7cd2" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.705631 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-sb\") pod \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.705678 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-nb\") pod \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.705702 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-swift-storage-0\") pod \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.705752 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j28lj\" (UniqueName: \"kubernetes.io/projected/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-kube-api-access-j28lj\") pod \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.705826 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-config\") pod \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.705991 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-svc\") pod \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.706478 4902 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.706502 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.727563 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7226-account-create-update-dvfjh"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.734075 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-kube-api-access-j28lj" (OuterVolumeSpecName: "kube-api-access-j28lj") pod "5ef26f87-2d73-4847-abfb-a3bbda8c01c6" (UID: "5ef26f87-2d73-4847-abfb-a3bbda8c01c6"). InnerVolumeSpecName "kube-api-access-j28lj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.761201 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-68564cb5c-bh98h"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.761520 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-68564cb5c-bh98h" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerName="barbican-worker-log" containerID="cri-o://43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.762149 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-68564cb5c-bh98h" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerName="barbican-worker" containerID="cri-o://5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.789423 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9bb1-account-create-update-dbdlg"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.794259 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.794568 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-log" containerID="cri-o://155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.795106 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-api" containerID="cri-o://fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.801420 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.803355 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ef26f87-2d73-4847-abfb-a3bbda8c01c6" (UID: "5ef26f87-2d73-4847-abfb-a3bbda8c01c6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.808012 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6f87-account-create-update-rx9dv"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.810313 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j28lj\" (UniqueName: \"kubernetes.io/projected/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-kube-api-access-j28lj\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.810342 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.822426 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-vcplz"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.833652 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-vcplz"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.845284 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5ef26f87-2d73-4847-abfb-a3bbda8c01c6" (UID: "5ef26f87-2d73-4847-abfb-a3bbda8c01c6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.878274 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerName="rabbitmq" containerID="cri-o://9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70" gracePeriod=604800 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.878447 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-rkcxd"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.889704 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5ef26f87-2d73-4847-abfb-a3bbda8c01c6" (UID: "5ef26f87-2d73-4847-abfb-a3bbda8c01c6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.911907 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.911934 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.917642 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-config" (OuterVolumeSpecName: "config") pod "5ef26f87-2d73-4847-abfb-a3bbda8c01c6" (UID: "5ef26f87-2d73-4847-abfb-a3bbda8c01c6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.951416 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-rkcxd"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.020939 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.022957 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5ef26f87-2d73-4847-abfb-a3bbda8c01c6" (UID: "5ef26f87-2d73-4847-abfb-a3bbda8c01c6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.026519 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zbrd5"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.148847 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-vrg52"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.163844 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.239255 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.239508 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://51f3e0557ba29d0e459dc32f45c40c004e66a2616c90bcc78b93663bdae1ff99" gracePeriod=30 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.249108 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.262086 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.268889 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.269218 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data podName:67f50f65-9151-4444-9680-f86e0f256069 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:16.269204992 +0000 UTC m=+1398.346038021 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data") pod "rabbitmq-server-0" (UID: "67f50f65-9151-4444-9680-f86e0f256069") : configmap "rabbitmq-config-data" not found Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.285014 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.285231 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c366100e-d2a0-4be9-965f-ef7b7ad39f78" containerName="nova-scheduler-scheduler" containerID="cri-o://421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812" gracePeriod=30 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.312181 4902 generic.go:334] "Generic (PLEG): container finished" podID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" exitCode=0 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.314668 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3/ovsdbserver-nb/0.log" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.314696 4902 generic.go:334] "Generic (PLEG): container finished" podID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerID="9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8" exitCode=143 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.314997 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="035bb03b-fb8e-4b30-a30f-bfde97b03291" path="/var/lib/kubelet/pods/035bb03b-fb8e-4b30-a30f-bfde97b03291/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.316096 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="095a6aec-1aa5-4754-818a-bbe7eedad9f2" path="/var/lib/kubelet/pods/095a6aec-1aa5-4754-818a-bbe7eedad9f2/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.316382 4902 generic.go:334] "Generic (PLEG): container finished" podID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerID="090f15138593116ea5509f9b1db81b64387863cddd781c3e2ec064762515d25e" exitCode=143 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.317115 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15ef0c45-4c21-4824-850e-545f66a2c20a" path="/var/lib/kubelet/pods/15ef0c45-4c21-4824-850e-545f66a2c20a/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.318060 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22787b52-e166-415c-906e-788b1b73ccd0" path="/var/lib/kubelet/pods/22787b52-e166-415c-906e-788b1b73ccd0/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.318396 4902 generic.go:334] "Generic (PLEG): container finished" podID="561efc1e-a930-440f-83b1-a75217a11f32" containerID="b91bda9e24415f053bbf7e3136ae0eb36d0535911dff5c3a69ee2c9fd40feb34" exitCode=143 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.319520 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dfed335-1a3f-4e42-b593-e5958039dadc" path="/var/lib/kubelet/pods/3dfed335-1a3f-4e42-b593-e5958039dadc/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.320685 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c6d6225-3f7d-485d-a384-5f0e53c3055d" path="/var/lib/kubelet/pods/4c6d6225-3f7d-485d-a384-5f0e53c3055d/volumes" Jan 21 14:57:14 crc 
kubenswrapper[4902]: I0121 14:57:14.321011 4902 generic.go:334] "Generic (PLEG): container finished" podID="c4168bc0-26cf-4786-9e28-95647462c372" containerID="baf5060a9be38be6557c2e269eeef0d7067b99a8ffc55de9fabcd6c3d7fd4375" exitCode=143 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.322165 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d" path="/var/lib/kubelet/pods/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.322749 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.324345 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86cb92f1-5dde-4389-a5c8-1c0f76b1478d" path="/var/lib/kubelet/pods/86cb92f1-5dde-4389-a5c8-1c0f76b1478d/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.325139 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f3ab19a-d650-41ea-aadd-8ec73ed824f2" path="/var/lib/kubelet/pods/8f3ab19a-d650-41ea-aadd-8ec73ed824f2/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.326199 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9959d508-3783-403a-bdd6-65159821fc9e" path="/var/lib/kubelet/pods/9959d508-3783-403a-bdd6-65159821fc9e/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.327601 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a55b324-126b-4571-a2ab-1ea8005e3c46" path="/var/lib/kubelet/pods/9a55b324-126b-4571-a2ab-1ea8005e3c46/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.328380 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acb2fcf0-980e-418a-b776-ec7836101d6b" path="/var/lib/kubelet/pods/acb2fcf0-980e-418a-b776-ec7836101d6b/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.329244 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df9277be-e557-4d2e-b799-8fc6def975b9" path="/var/lib/kubelet/pods/df9277be-e557-4d2e-b799-8fc6def975b9/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.329910 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eef10c95-ed5c-4479-b01f-8f956d478dcf" path="/var/lib/kubelet/pods/eef10c95-ed5c-4479-b01f-8f956d478dcf/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.331090 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5dd3ace-42a8-4c8e-8531-0c04f145a002" path="/var/lib/kubelet/pods/f5dd3ace-42a8-4c8e-8531-0c04f145a002/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.352195 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerID="155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f" exitCode=143 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.358556 4902 generic.go:334] "Generic (PLEG): container finished" podID="b14dfbd1-cf80-4ba8-9372-ca5767f5d689" containerID="c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a" exitCode=137 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.358993 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.361750 4902 generic.go:334] "Generic (PLEG): container finished" podID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerID="c7dbc8dbff5390b63de46436cbdf0b7cd9f0cbbc930ab3a08d07d477a6d55001" exitCode=0 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368104 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4sm9h" event={"ID":"bfa512c9-b91a-4a30-8a23-548ef53b094e","Type":"ContainerDied","Data":"df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368143 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3","Type":"ContainerDied","Data":"9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368161 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2kxkv"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368178 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368193 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3","Type":"ContainerDied","Data":"2fa4acbc229f26d07119b0fd5c43c50281090a6fcc6e1442dc8b7ca5938b7ddb"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368203 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fa4acbc229f26d07119b0fd5c43c50281090a6fcc6e1442dc8b7ca5938b7ddb" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368214 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa6f350-dd82-4d59-ac24-5460acc2a8a6","Type":"ContainerDied","Data":"090f15138593116ea5509f9b1db81b64387863cddd781c3e2ec064762515d25e"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368227 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2kxkv"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368242 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df595696d-2ftxp" event={"ID":"561efc1e-a930-440f-83b1-a75217a11f32","Type":"ContainerDied","Data":"b91bda9e24415f053bbf7e3136ae0eb36d0535911dff5c3a69ee2c9fd40feb34"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368255 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4168bc0-26cf-4786-9e28-95647462c372","Type":"ContainerDied","Data":"baf5060a9be38be6557c2e269eeef0d7067b99a8ffc55de9fabcd6c3d7fd4375"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368266 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lrj4d"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368277 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" event={"ID":"5ef26f87-2d73-4847-abfb-a3bbda8c01c6","Type":"ContainerDied","Data":"e36154beae48e47217e600b25e3832ce07f5b5cba75bd916fc8d19d2d77082ca"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368290 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"0ea9ca5b-2e24-41de-8a99-a882ec11c222","Type":"ContainerDied","Data":"155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368301 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368316 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"58b4678d-e59b-49d1-b06e-338a42a0e51e","Type":"ContainerDied","Data":"c7dbc8dbff5390b63de46436cbdf0b7cd9f0cbbc930ab3a08d07d477a6d55001"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368328 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lrj4d"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368478 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="dbc235c8-beef-433d-b663-e1d09b6a9b65" containerName="nova-cell1-conductor-conductor" containerID="cri-o://357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae" gracePeriod=30 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.369879 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="67f50f65-9151-4444-9680-f86e0f256069" containerName="rabbitmq" containerID="cri-o://d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852" gracePeriod=604800 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.370181 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="359a818e-1c34-4dfd-bb59-0e72280a85a0" containerName="nova-cell0-conductor-conductor" containerID="cri-o://a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" gracePeriod=30 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.370250 4902 scope.go:117] "RemoveContainer" containerID="193c2ec1f234088f5b0bf3f8d841b9715ab506a6f64990bd75f4173da10330ef" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.371106 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config-secret\") pod \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.371138 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5x67\" (UniqueName: \"kubernetes.io/projected/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-kube-api-access-s5x67\") pod \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.371229 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-combined-ca-bundle\") pod \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.371354 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config\") pod \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 
14:57:14.386351 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157" exitCode=0 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.386379 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b" exitCode=0 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.386417 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.386441 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.389366 4902 generic.go:334] "Generic (PLEG): container finished" podID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerID="43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb" exitCode=143 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.389411 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68564cb5c-bh98h" event={"ID":"c653ffa0-195e-4eda-8c25-cfcff2715bdf","Type":"ContainerDied","Data":"43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.391669 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-kube-api-access-s5x67" (OuterVolumeSpecName: "kube-api-access-s5x67") pod "b14dfbd1-cf80-4ba8-9372-ca5767f5d689" (UID: "b14dfbd1-cf80-4ba8-9372-ca5767f5d689"). InnerVolumeSpecName "kube-api-access-s5x67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.392432 4902 generic.go:334] "Generic (PLEG): container finished" podID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerID="f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48" exitCode=143 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.392476 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" event={"ID":"365d6c18-395e-4a62-939d-a04927ffa8aa","Type":"ContainerDied","Data":"f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.393754 4902 generic.go:334] "Generic (PLEG): container finished" podID="db4d047b-49f4-4b55-a053-081f1be632b7" containerID="d81b469d4bfe4317399c28b768091ee1e4d32b1ffeb38b5ab40fde67bdde4b7f" exitCode=143 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.393782 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"db4d047b-49f4-4b55-a053-081f1be632b7","Type":"ContainerDied","Data":"d81b469d4bfe4317399c28b768091ee1e4d32b1ffeb38b5ab40fde67bdde4b7f"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.397536 4902 generic.go:334] "Generic (PLEG): container finished" podID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerID="2c30f8fcf44519868021b999009e6e0a364f65ba9bb5e12d8b816868d45e7ed6" exitCode=0 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.397581 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887695489-rtxbl" event={"ID":"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5","Type":"ContainerDied","Data":"2c30f8fcf44519868021b999009e6e0a364f65ba9bb5e12d8b816868d45e7ed6"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.400601 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.401221 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxwsm" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.401506 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm" event={"ID":"e8135258-f03d-4c9a-be6f-7dd1dd099188","Type":"ContainerDied","Data":"4abe7b149b5deee49487446d44f9ad3581d14a3d2ca4cc34cd11e6b49541512c"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.408652 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3/ovsdbserver-nb/0.log" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.408728 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.442661 4902 scope.go:117] "RemoveContainer" containerID="462406faba8c1d9f8c0864988f3185e2594f2024aa4406a8b2fa2099a7006d0c" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.457615 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b14dfbd1-cf80-4ba8-9372-ca5767f5d689" (UID: "b14dfbd1-cf80-4ba8-9372-ca5767f5d689"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.471563 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_55191d4e-0310-4e6a-a10c-902e0cc8a209/ovsdbserver-sb/0.log" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.471644 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.474425 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5x67\" (UniqueName: \"kubernetes.io/projected/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-kube-api-access-s5x67\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.476951 4902 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.484268 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b14dfbd1-cf80-4ba8-9372-ca5767f5d689" (UID: "b14dfbd1-cf80-4ba8-9372-ca5767f5d689"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.496154 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b14dfbd1-cf80-4ba8-9372-ca5767f5d689" (UID: "b14dfbd1-cf80-4ba8-9372-ca5767f5d689"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.496214 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kxwsm"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.503146 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kxwsm"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.510878 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-c27gh"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.526745 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-c27gh"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.539861 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9bb1-account-create-update-dbdlg"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.544187 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zbrd5"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.551361 4902 scope.go:117] "RemoveContainer" containerID="c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.556380 4902 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 14:57:14 crc kubenswrapper[4902]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: if [ -n "barbican" ]; then Jan 21 14:57:14 crc kubenswrapper[4902]: GRANT_DATABASE="barbican" Jan 21 14:57:14 crc kubenswrapper[4902]: else Jan 21 14:57:14 crc kubenswrapper[4902]: GRANT_DATABASE="*" Jan 21 14:57:14 crc kubenswrapper[4902]: fi Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: # going for maximum compatibility here: Jan 21 14:57:14 crc kubenswrapper[4902]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 14:57:14 crc kubenswrapper[4902]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 14:57:14 crc kubenswrapper[4902]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 21 14:57:14 crc kubenswrapper[4902]: # support updates Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: $MYSQL_CMD < logger="UnhandledError" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.557617 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-9bb1-account-create-update-dbdlg" podUID="5a189ccd-729c-4453-8adf-7ef08834d320" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.558453 4902 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 14:57:14 crc kubenswrapper[4902]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: if [ -n "" ]; then Jan 21 14:57:14 crc kubenswrapper[4902]: GRANT_DATABASE="" Jan 21 14:57:14 crc kubenswrapper[4902]: else Jan 21 14:57:14 crc kubenswrapper[4902]: GRANT_DATABASE="*" Jan 21 14:57:14 crc kubenswrapper[4902]: fi Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: # going for maximum compatibility here: Jan 21 14:57:14 crc kubenswrapper[4902]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 14:57:14 crc kubenswrapper[4902]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 14:57:14 crc kubenswrapper[4902]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 21 14:57:14 crc kubenswrapper[4902]: # support updates Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: $MYSQL_CMD < logger="UnhandledError" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.560261 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-zbrd5" podUID="8e00e8be-96f7-4457-821f-440694bd8692" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.580050 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-scripts\") pod \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.580111 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9q6s\" (UniqueName: \"kubernetes.io/projected/55191d4e-0310-4e6a-a10c-902e0cc8a209-kube-api-access-g9q6s\") pod \"55191d4e-0310-4e6a-a10c-902e0cc8a209\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.580148 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.580203 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-combined-ca-bundle\") pod \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.580229 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdb-rundir\") pod \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.580270 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-combined-ca-bundle\") pod \"55191d4e-0310-4e6a-a10c-902e0cc8a209\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.580290 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdbserver-sb-tls-certs\") pod \"55191d4e-0310-4e6a-a10c-902e0cc8a209\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.580334 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-config\") pod \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.581539 4902 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdbserver-nb-tls-certs\") pod \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.581564 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"55191d4e-0310-4e6a-a10c-902e0cc8a209\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.581635 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-metrics-certs-tls-certs\") pod \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.581652 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d8zs\" (UniqueName: \"kubernetes.io/projected/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-kube-api-access-5d8zs\") pod \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.581671 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-metrics-certs-tls-certs\") pod \"55191d4e-0310-4e6a-a10c-902e0cc8a209\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.581689 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdb-rundir\") pod \"55191d4e-0310-4e6a-a10c-902e0cc8a209\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.582220 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-config\") pod \"55191d4e-0310-4e6a-a10c-902e0cc8a209\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.582282 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-scripts\") pod \"55191d4e-0310-4e6a-a10c-902e0cc8a209\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.585307 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.585338 4902 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.586212 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-scripts" (OuterVolumeSpecName: "scripts") pod 
"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" (UID: "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.588125 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" (UID: "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.588984 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-scripts" (OuterVolumeSpecName: "scripts") pod "55191d4e-0310-4e6a-a10c-902e0cc8a209" (UID: "55191d4e-0310-4e6a-a10c-902e0cc8a209"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.591841 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-config" (OuterVolumeSpecName: "config") pod "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" (UID: "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.592401 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "55191d4e-0310-4e6a-a10c-902e0cc8a209" (UID: "55191d4e-0310-4e6a-a10c-902e0cc8a209"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.593800 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55191d4e-0310-4e6a-a10c-902e0cc8a209-kube-api-access-g9q6s" (OuterVolumeSpecName: "kube-api-access-g9q6s") pod "55191d4e-0310-4e6a-a10c-902e0cc8a209" (UID: "55191d4e-0310-4e6a-a10c-902e0cc8a209"). InnerVolumeSpecName "kube-api-access-g9q6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.594239 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-config" (OuterVolumeSpecName: "config") pod "55191d4e-0310-4e6a-a10c-902e0cc8a209" (UID: "55191d4e-0310-4e6a-a10c-902e0cc8a209"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.619108 4902 scope.go:117] "RemoveContainer" containerID="c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.620205 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a\": container with ID starting with c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a not found: ID does not exist" containerID="c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.620274 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a"} err="failed to get container status \"c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a\": rpc error: code = NotFound desc = could not find container \"c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a\": container with ID starting with c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a not found: ID does not exist" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.620300 4902 scope.go:117] "RemoveContainer" containerID="339126d2349790760c7b3087cf9fa15cd976581645c959f56ddb41d46b290f7c" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.620396 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "55191d4e-0310-4e6a-a10c-902e0cc8a209" (UID: "55191d4e-0310-4e6a-a10c-902e0cc8a209"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.627518 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-kube-api-access-5d8zs" (OuterVolumeSpecName: "kube-api-access-5d8zs") pod "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" (UID: "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3"). InnerVolumeSpecName "kube-api-access-5d8zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.630618 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" (UID: "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691888 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d8zs\" (UniqueName: \"kubernetes.io/projected/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-kube-api-access-5d8zs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691910 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691918 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691926 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691934 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691943 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9q6s\" (UniqueName: \"kubernetes.io/projected/55191d4e-0310-4e6a-a10c-902e0cc8a209-kube-api-access-g9q6s\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691962 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691970 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691978 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691989 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.721551 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6f87-account-create-update-rx9dv"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.749244 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.750503 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.755304 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , 
exit code -1" containerID="a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.760562 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.764600 4902 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 14:57:14 crc kubenswrapper[4902]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: if [ -n "nova_api" ]; then Jan 21 14:57:14 crc kubenswrapper[4902]: GRANT_DATABASE="nova_api" Jan 21 14:57:14 crc kubenswrapper[4902]: else Jan 21 14:57:14 crc kubenswrapper[4902]: GRANT_DATABASE="*" Jan 21 14:57:14 crc kubenswrapper[4902]: fi Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: # going for maximum compatibility here: Jan 21 14:57:14 crc kubenswrapper[4902]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 14:57:14 crc kubenswrapper[4902]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 14:57:14 crc kubenswrapper[4902]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 21 14:57:14 crc kubenswrapper[4902]: # support updates Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: $MYSQL_CMD < logger="UnhandledError" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.767137 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-6f87-account-create-update-rx9dv" podUID="8c005e52-f6e5-413f-ba23-cb99e461cb66" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.775737 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.775789 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="359a818e-1c34-4dfd-bb59-0e72280a85a0" containerName="nova-cell0-conductor-conductor" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.793187 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7226-account-create-update-dvfjh"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.798000 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.799698 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.811829 4902 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 14:57:14 crc kubenswrapper[4902]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: if [ -n "cinder" ]; then Jan 21 14:57:14 crc kubenswrapper[4902]: GRANT_DATABASE="cinder" Jan 21 14:57:14 crc kubenswrapper[4902]: else Jan 21 14:57:14 crc kubenswrapper[4902]: GRANT_DATABASE="*" Jan 21 14:57:14 crc kubenswrapper[4902]: fi Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: # going for maximum compatibility here: Jan 21 14:57:14 crc kubenswrapper[4902]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 14:57:14 crc kubenswrapper[4902]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 14:57:14 crc kubenswrapper[4902]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 14:57:14 crc kubenswrapper[4902]: # support updates Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: $MYSQL_CMD < logger="UnhandledError" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.815262 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-7226-account-create-update-dvfjh" podUID="6a8bdead-378c-4db8-acfe-a0b449c69e8a" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.819960 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "55191d4e-0310-4e6a-a10c-902e0cc8a209" (UID: "55191d4e-0310-4e6a-a10c-902e0cc8a209"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.823007 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-vrg52"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.867257 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55191d4e-0310-4e6a-a10c-902e0cc8a209" (UID: "55191d4e-0310-4e6a-a10c-902e0cc8a209"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.880252 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" (UID: "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.892454 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "55191d4e-0310-4e6a-a10c-902e0cc8a209" (UID: "55191d4e-0310-4e6a-a10c-902e0cc8a209"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.893826 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" (UID: "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.898082 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" (UID: "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3"). 
InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.901345 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.901375 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.901386 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.901395 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.901404 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.901412 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: W0121 14:57:14.966276 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a61b8e1_9a04_429b_9439_bee181301046.slice/crio-c1a2c01a0ef6d99a3d9e8bb7f28626beef1ad4446ccdbb5bcacd934ce87ecd23 WatchSource:0}: Error finding container c1a2c01a0ef6d99a3d9e8bb7f28626beef1ad4446ccdbb5bcacd934ce87ecd23: Status 404 returned error can't find the container with id c1a2c01a0ef6d99a3d9e8bb7f28626beef1ad4446ccdbb5bcacd934ce87ecd23 Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.010350 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7887695489-rtxbl" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.159:9696/\": dial tcp 10.217.0.159:9696: connect: connection refused" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.018491 4902 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 14:57:15 crc kubenswrapper[4902]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 14:57:15 crc kubenswrapper[4902]: Jan 21 14:57:15 crc kubenswrapper[4902]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 14:57:15 crc kubenswrapper[4902]: Jan 21 14:57:15 crc kubenswrapper[4902]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 14:57:15 crc kubenswrapper[4902]: Jan 21 14:57:15 crc kubenswrapper[4902]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 14:57:15 crc kubenswrapper[4902]: Jan 
21 14:57:15 crc kubenswrapper[4902]: if [ -n "nova_cell0" ]; then Jan 21 14:57:15 crc kubenswrapper[4902]: GRANT_DATABASE="nova_cell0" Jan 21 14:57:15 crc kubenswrapper[4902]: else Jan 21 14:57:15 crc kubenswrapper[4902]: GRANT_DATABASE="*" Jan 21 14:57:15 crc kubenswrapper[4902]: fi Jan 21 14:57:15 crc kubenswrapper[4902]: Jan 21 14:57:15 crc kubenswrapper[4902]: # going for maximum compatibility here: Jan 21 14:57:15 crc kubenswrapper[4902]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 14:57:15 crc kubenswrapper[4902]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 14:57:15 crc kubenswrapper[4902]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 14:57:15 crc kubenswrapper[4902]: # support updates Jan 21 14:57:15 crc kubenswrapper[4902]: Jan 21 14:57:15 crc kubenswrapper[4902]: $MYSQL_CMD < logger="UnhandledError" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.019666 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-7df7-account-create-update-vrg52" podUID="1a61b8e1-9a04-429b-9439-bee181301046" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.110342 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-54bc9cbc97-hx966"] Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.110562 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-54bc9cbc97-hx966" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerName="proxy-httpd" containerID="cri-o://03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398" gracePeriod=30 Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.110931 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-54bc9cbc97-hx966" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerName="proxy-server" containerID="cri-o://2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb" gracePeriod=30 Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.112415 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.112481 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data podName:8d7103bd-b24b-4a0c-b68a-17373307f1aa nodeName:}" failed. No retries permitted until 2026-01-21 14:57:19.112467602 +0000 UTC m=+1401.189300631 (durationBeforeRetry 4s). 
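[Annotation] Every mariadb-account-create-update dump above is cut off at "$MYSQL_CMD <", so the heredoc body of the script never reaches the log. Based only on the three compatibility comments that are visible (MySQL 8 no longer creates users implicitly via GRANT; MariaDB's CREATE OR REPLACE USER has no MySQL equivalent; create with CREATE, then set password/TLS with ALTER so re-runs act as updates), a minimal sketch of what such a body could contain follows. ${DatabaseUser} and the exact SQL are assumptions for illustration; MYSQL_CMD, GRANT_DATABASE, and DatabasePassword are defined in the prologue captured above.

# Sketch of the truncated heredoc; ${DatabaseUser} is hypothetical, the
# other variables come from the script prologue visible in the log.
$MYSQL_CMD <<EOF
CREATE USER IF NOT EXISTS '${DatabaseUser}'@'%';
ALTER USER '${DatabaseUser}'@'%' IDENTIFIED BY '${DatabasePassword}';
-- TLS requirements would be applied the same way, e.g.:
-- ALTER USER '${DatabaseUser}'@'%' REQUIRE SSL;
GRANT ALL PRIVILEGES ON ${GRANT_DATABASE}.* TO '${DatabaseUser}'@'%';
EOF

Splitting CREATE USER IF NOT EXISTS from the ALTER is what makes one script valid on both MySQL 8 and MariaDB and lets the same job rotate an existing account's password, which is what the "support updates" comment is after.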
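[Annotation] The "durationBeforeRetry 4s" on the failed config-data mount below is kubelet's per-operation exponential backoff (nestedpendingoperations): to my knowledge the delay starts near 500ms and doubles on each consecutive failure, capped at roughly two minutes, so 4s suggests about the fourth consecutive failure while the rabbitmq-cell1-config-data configmap is missing. A toy illustration of that schedule — the constants are assumptions about kubelet internals, not values read from this log:

delay_ms=500                     # assumed initial backoff
for failure in 1 2 3 4 5; do
  echo "failure ${failure}: next retry in ${delay_ms} ms"
  delay_ms=$(( delay_ms * 2 ))   # assumed doubling per consecutive failure
done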
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data") pod "rabbitmq-cell1-server-0" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa") : configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.419282 4902 generic.go:334] "Generic (PLEG): container finished" podID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerID="03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398" exitCode=0 Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.419399 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54bc9cbc97-hx966" event={"ID":"3389852b-01f7-4dc9-b7c2-73c858ba1268","Type":"ContainerDied","Data":"03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.437188 4902 generic.go:334] "Generic (PLEG): container finished" podID="d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" containerID="51f3e0557ba29d0e459dc32f45c40c004e66a2616c90bcc78b93663bdae1ff99" exitCode=0 Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.437293 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044","Type":"ContainerDied","Data":"51f3e0557ba29d0e459dc32f45c40c004e66a2616c90bcc78b93663bdae1ff99"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.452013 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9bb1-account-create-update-dbdlg" event={"ID":"5a189ccd-729c-4453-8adf-7ef08834d320","Type":"ContainerStarted","Data":"1120906563f2c993fa9342e6037fa08fad280845ccc8b73576e20887d9536a97"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.474614 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_55191d4e-0310-4e6a-a10c-902e0cc8a209/ovsdbserver-sb/0.log" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.474806 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"55191d4e-0310-4e6a-a10c-902e0cc8a209","Type":"ContainerDied","Data":"7b6bfe3f7296114e25ecf2caceede712b35695e06d9545a4b2270d1cce053ea2"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.474876 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.474879 4902 scope.go:117] "RemoveContainer" containerID="f33529c27085ffa8a5953825706b4cb4672e9bfd551a411eede0445f1ce65803" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.478210 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7226-account-create-update-dvfjh" event={"ID":"6a8bdead-378c-4db8-acfe-a0b449c69e8a","Type":"ContainerStarted","Data":"f20441eedce16d5292ec3a928996e2d617be39de9f96a7561e77ab7123507595"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.527646 4902 generic.go:334] "Generic (PLEG): container finished" podID="94031dcf-9569-4cf1-90a9-61c962434ae8" containerID="6843f7fdaa415e7e2f0347cd97fdaa8f7eaf2a1c6b75202daa5f85889752389a" exitCode=0 Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.527758 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"94031dcf-9569-4cf1-90a9-61c962434ae8","Type":"ContainerDied","Data":"6843f7fdaa415e7e2f0347cd97fdaa8f7eaf2a1c6b75202daa5f85889752389a"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.527797 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"94031dcf-9569-4cf1-90a9-61c962434ae8","Type":"ContainerDied","Data":"0e2225caf36121574255d90227f9966e2a981074b953f7b34948ace2a7d9beae"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.527812 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e2225caf36121574255d90227f9966e2a981074b953f7b34948ace2a7d9beae" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.539201 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6f87-account-create-update-rx9dv" event={"ID":"8c005e52-f6e5-413f-ba23-cb99e461cb66","Type":"ContainerStarted","Data":"556a8a0278b939a9c8dbd44294ce0e229928d9c06e0c74dd319ffc8da0bf47de"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.544683 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zbrd5" event={"ID":"8e00e8be-96f7-4457-821f-440694bd8692","Type":"ContainerStarted","Data":"3e4a5f1b1b650dc3abf20ac520f3ebf17eba8a0b1800cb370ba3510c73dd619b"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.588106 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7df7-account-create-update-vrg52" event={"ID":"1a61b8e1-9a04-429b-9439-bee181301046","Type":"ContainerStarted","Data":"c1a2c01a0ef6d99a3d9e8bb7f28626beef1ad4446ccdbb5bcacd934ce87ecd23"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.592538 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.599022 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.602182 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.602303 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.602397 4902 generic.go:334] "Generic (PLEG): container finished" podID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerID="669110a27652bb9b7b8004db550a35eb0dceaedaf48edf3ca2483cc2449bc57c" exitCode=0 Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.602440 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"58b4678d-e59b-49d1-b06e-338a42a0e51e","Type":"ContainerDied","Data":"669110a27652bb9b7b8004db550a35eb0dceaedaf48edf3ca2483cc2449bc57c"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.627731 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.631423 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-galera-tls-certs\") pod \"94031dcf-9569-4cf1-90a9-61c962434ae8\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.631581 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58b4678d-e59b-49d1-b06e-338a42a0e51e-etc-machine-id\") pod \"58b4678d-e59b-49d1-b06e-338a42a0e51e\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.631662 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data-custom\") pod \"58b4678d-e59b-49d1-b06e-338a42a0e51e\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.632586 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b4678d-e59b-49d1-b06e-338a42a0e51e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "58b4678d-e59b-49d1-b06e-338a42a0e51e" (UID: "58b4678d-e59b-49d1-b06e-338a42a0e51e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.635687 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"94031dcf-9569-4cf1-90a9-61c962434ae8\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.635794 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cs94\" (UniqueName: \"kubernetes.io/projected/94031dcf-9569-4cf1-90a9-61c962434ae8-kube-api-access-5cs94\") pod \"94031dcf-9569-4cf1-90a9-61c962434ae8\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.635871 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-combined-ca-bundle\") pod \"94031dcf-9569-4cf1-90a9-61c962434ae8\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.635931 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-kolla-config\") pod \"94031dcf-9569-4cf1-90a9-61c962434ae8\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.636061 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-default\") pod \"94031dcf-9569-4cf1-90a9-61c962434ae8\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.636194 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-generated\") pod \"94031dcf-9569-4cf1-90a9-61c962434ae8\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.636270 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-scripts\") pod \"58b4678d-e59b-49d1-b06e-338a42a0e51e\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.636332 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x88d\" (UniqueName: \"kubernetes.io/projected/58b4678d-e59b-49d1-b06e-338a42a0e51e-kube-api-access-9x88d\") pod \"58b4678d-e59b-49d1-b06e-338a42a0e51e\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.636974 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58b4678d-e59b-49d1-b06e-338a42a0e51e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.639934 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "58b4678d-e59b-49d1-b06e-338a42a0e51e" (UID: "58b4678d-e59b-49d1-b06e-338a42a0e51e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.648268 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b4678d-e59b-49d1-b06e-338a42a0e51e-kube-api-access-9x88d" (OuterVolumeSpecName: "kube-api-access-9x88d") pod "58b4678d-e59b-49d1-b06e-338a42a0e51e" (UID: "58b4678d-e59b-49d1-b06e-338a42a0e51e"). InnerVolumeSpecName "kube-api-access-9x88d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.650768 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "94031dcf-9569-4cf1-90a9-61c962434ae8" (UID: "94031dcf-9569-4cf1-90a9-61c962434ae8"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.655858 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-scripts" (OuterVolumeSpecName: "scripts") pod "58b4678d-e59b-49d1-b06e-338a42a0e51e" (UID: "58b4678d-e59b-49d1-b06e-338a42a0e51e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.659727 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "94031dcf-9569-4cf1-90a9-61c962434ae8" (UID: "94031dcf-9569-4cf1-90a9-61c962434ae8"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.669460 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "mysql-db") pod "94031dcf-9569-4cf1-90a9-61c962434ae8" (UID: "94031dcf-9569-4cf1-90a9-61c962434ae8"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.669573 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94031dcf-9569-4cf1-90a9-61c962434ae8-kube-api-access-5cs94" (OuterVolumeSpecName: "kube-api-access-5cs94") pod "94031dcf-9569-4cf1-90a9-61c962434ae8" (UID: "94031dcf-9569-4cf1-90a9-61c962434ae8"). InnerVolumeSpecName "kube-api-access-5cs94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.669818 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "94031dcf-9569-4cf1-90a9-61c962434ae8" (UID: "94031dcf-9569-4cf1-90a9-61c962434ae8"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.669899 4902 scope.go:117] "RemoveContainer" containerID="00bf7a3928a19891dd7e4eeb9d6cbd183d170218b09cf88bac1204f77dcea9f1" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.709219 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94031dcf-9569-4cf1-90a9-61c962434ae8" (UID: "94031dcf-9569-4cf1-90a9-61c962434ae8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.721979 4902 scope.go:117] "RemoveContainer" containerID="c7dbc8dbff5390b63de46436cbdf0b7cd9f0cbbc930ab3a08d07d477a6d55001" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.738650 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data\") pod \"58b4678d-e59b-49d1-b06e-338a42a0e51e\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.738696 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-operator-scripts\") pod \"94031dcf-9569-4cf1-90a9-61c962434ae8\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.738727 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-combined-ca-bundle\") pod \"58b4678d-e59b-49d1-b06e-338a42a0e51e\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.741507 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94031dcf-9569-4cf1-90a9-61c962434ae8" (UID: "94031dcf-9569-4cf1-90a9-61c962434ae8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.741557 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "94031dcf-9569-4cf1-90a9-61c962434ae8" (UID: "94031dcf-9569-4cf1-90a9-61c962434ae8"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752579 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752618 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cs94\" (UniqueName: \"kubernetes.io/projected/94031dcf-9569-4cf1-90a9-61c962434ae8-kube-api-access-5cs94\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752628 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752638 4902 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752646 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752657 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752666 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752674 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x88d\" (UniqueName: \"kubernetes.io/projected/58b4678d-e59b-49d1-b06e-338a42a0e51e-kube-api-access-9x88d\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752682 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752690 4902 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752698 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.762331 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.762862 4902 scope.go:117] "RemoveContainer" containerID="669110a27652bb9b7b8004db550a35eb0dceaedaf48edf3ca2483cc2449bc57c" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.778545 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.799241 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58b4678d-e59b-49d1-b06e-338a42a0e51e" (UID: "58b4678d-e59b-49d1-b06e-338a42a0e51e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810288 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2c87v"] Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810728 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94031dcf-9569-4cf1-90a9-61c962434ae8" containerName="mysql-bootstrap" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810747 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="94031dcf-9569-4cf1-90a9-61c962434ae8" containerName="mysql-bootstrap" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810757 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810766 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810782 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810788 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810797 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerName="ovsdbserver-nb" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810802 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerName="ovsdbserver-nb" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810812 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerName="ovsdbserver-sb" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810819 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerName="ovsdbserver-sb" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810838 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" containerName="init" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810844 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" containerName="init" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 
14:57:15.810855 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8891f80f-6cb0-4dc6-9f92-836d465e1c84" containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810863 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8891f80f-6cb0-4dc6-9f92-836d465e1c84" containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810876 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" containerName="dnsmasq-dns" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810884 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" containerName="dnsmasq-dns" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810897 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94031dcf-9569-4cf1-90a9-61c962434ae8" containerName="galera" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810903 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="94031dcf-9569-4cf1-90a9-61c962434ae8" containerName="galera" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810917 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerName="cinder-scheduler" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810922 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerName="cinder-scheduler" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810933 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerName="probe" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810939 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerName="probe" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810949 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8135258-f03d-4c9a-be6f-7dd1dd099188" containerName="ovn-controller" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810956 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8135258-f03d-4c9a-be6f-7dd1dd099188" containerName="ovn-controller" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810967 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810975 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811165 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8891f80f-6cb0-4dc6-9f92-836d465e1c84" containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811181 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerName="cinder-scheduler" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811195 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerName="ovsdbserver-sb" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811204 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" 
containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811217 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811224 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="94031dcf-9569-4cf1-90a9-61c962434ae8" containerName="galera" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811230 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811238 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" containerName="dnsmasq-dns" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811244 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerName="ovsdbserver-nb" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811252 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerName="probe" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811262 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8135258-f03d-4c9a-be6f-7dd1dd099188" containerName="ovn-controller" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811899 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2c87v"] Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811981 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.814613 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.825202 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.837822 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.853793 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-combined-ca-bundle\") pod \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.853857 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-nova-novncproxy-tls-certs\") pod \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.853982 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mw6c\" (UniqueName: \"kubernetes.io/projected/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-kube-api-access-4mw6c\") pod \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.854006 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-config-data\") pod \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.854037 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-vencrypt-tls-certs\") pod \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.854335 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttmff\" (UniqueName: \"kubernetes.io/projected/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-kube-api-access-ttmff\") pod \"root-account-create-update-2c87v\" (UID: \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\") " pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.854459 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-operator-scripts\") pod \"root-account-create-update-2c87v\" (UID: \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\") " pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.854604 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.854618 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.874269 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data" (OuterVolumeSpecName: "config-data") pod "58b4678d-e59b-49d1-b06e-338a42a0e51e" (UID: "58b4678d-e59b-49d1-b06e-338a42a0e51e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.882869 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-kube-api-access-4mw6c" (OuterVolumeSpecName: "kube-api-access-4mw6c") pod "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" (UID: "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044"). InnerVolumeSpecName "kube-api-access-4mw6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.906356 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" (UID: "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.924553 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" (UID: "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.939299 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-config-data" (OuterVolumeSpecName: "config-data") pod "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" (UID: "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.944497 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" (UID: "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.956118 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-operator-scripts\") pod \"root-account-create-update-2c87v\" (UID: \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\") " pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.956268 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttmff\" (UniqueName: \"kubernetes.io/projected/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-kube-api-access-ttmff\") pod \"root-account-create-update-2c87v\" (UID: \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\") " pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.956355 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.956378 4902 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.956392 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mw6c\" (UniqueName: \"kubernetes.io/projected/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-kube-api-access-4mw6c\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.956403 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.956414 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.956424 4902 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.968122 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-operator-scripts\") pod \"root-account-create-update-2c87v\" (UID: \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\") " pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.986645 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttmff\" (UniqueName: \"kubernetes.io/projected/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-kube-api-access-ttmff\") pod \"root-account-create-update-2c87v\" (UID: \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\") " pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.089115 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.143892 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.159760 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a189ccd-729c-4453-8adf-7ef08834d320-operator-scripts\") pod \"5a189ccd-729c-4453-8adf-7ef08834d320\" (UID: \"5a189ccd-729c-4453-8adf-7ef08834d320\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.159941 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx42c\" (UniqueName: \"kubernetes.io/projected/5a189ccd-729c-4453-8adf-7ef08834d320-kube-api-access-bx42c\") pod \"5a189ccd-729c-4453-8adf-7ef08834d320\" (UID: \"5a189ccd-729c-4453-8adf-7ef08834d320\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.160457 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a189ccd-729c-4453-8adf-7ef08834d320-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a189ccd-729c-4453-8adf-7ef08834d320" (UID: "5a189ccd-729c-4453-8adf-7ef08834d320"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.166379 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a189ccd-729c-4453-8adf-7ef08834d320-kube-api-access-bx42c" (OuterVolumeSpecName: "kube-api-access-bx42c") pod "5a189ccd-729c-4453-8adf-7ef08834d320" (UID: "5a189ccd-729c-4453-8adf-7ef08834d320"). InnerVolumeSpecName "kube-api-access-bx42c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.262142 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a189ccd-729c-4453-8adf-7ef08834d320-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.262484 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx42c\" (UniqueName: \"kubernetes.io/projected/5a189ccd-729c-4453-8adf-7ef08834d320-kube-api-access-bx42c\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.314289 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="169597ed-1e1f-490a-8d17-0d6520ae39d1" path="/var/lib/kubelet/pods/169597ed-1e1f-490a-8d17-0d6520ae39d1/volumes" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.315124 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" path="/var/lib/kubelet/pods/55191d4e-0310-4e6a-a10c-902e0cc8a209/volumes" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.316500 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8891f80f-6cb0-4dc6-9f92-836d465e1c84" path="/var/lib/kubelet/pods/8891f80f-6cb0-4dc6-9f92-836d465e1c84/volumes" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.317638 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b14dfbd1-cf80-4ba8-9372-ca5767f5d689" path="/var/lib/kubelet/pods/b14dfbd1-cf80-4ba8-9372-ca5767f5d689/volumes" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.318275 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" path="/var/lib/kubelet/pods/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3/volumes" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.319735 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8135258-f03d-4c9a-be6f-7dd1dd099188" path="/var/lib/kubelet/pods/e8135258-f03d-4c9a-be6f-7dd1dd099188/volumes" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.321712 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ab900b-a76f-495c-a309-f597e2d835a8" path="/var/lib/kubelet/pods/f6ab900b-a76f-495c-a309-f597e2d835a8/volumes" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.354478 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.364615 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a8bdead-378c-4db8-acfe-a0b449c69e8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a8bdead-378c-4db8-acfe-a0b449c69e8a" (UID: "6a8bdead-378c-4db8-acfe-a0b449c69e8a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.364031 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a8bdead-378c-4db8-acfe-a0b449c69e8a-operator-scripts\") pod \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\" (UID: \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.365598 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8zjl\" (UniqueName: \"kubernetes.io/projected/6a8bdead-378c-4db8-acfe-a0b449c69e8a-kube-api-access-n8zjl\") pod \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\" (UID: \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\") " Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.366642 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.366936 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data podName:67f50f65-9151-4444-9680-f86e0f256069 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:20.366880477 +0000 UTC m=+1402.443713506 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data") pod "rabbitmq-server-0" (UID: "67f50f65-9151-4444-9680-f86e0f256069") : configmap "rabbitmq-config-data" not found Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.367216 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a8bdead-378c-4db8-acfe-a0b449c69e8a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.390389 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a8bdead-378c-4db8-acfe-a0b449c69e8a-kube-api-access-n8zjl" (OuterVolumeSpecName: "kube-api-access-n8zjl") pod "6a8bdead-378c-4db8-acfe-a0b449c69e8a" (UID: "6a8bdead-378c-4db8-acfe-a0b449c69e8a"). InnerVolumeSpecName "kube-api-access-n8zjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.422883 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.443481 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.445004 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.445088 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.449690 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.449743 4902 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.453192 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.458507 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.469776 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.471414 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.471472 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.472461 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cln9f\" (UniqueName: \"kubernetes.io/projected/1a61b8e1-9a04-429b-9439-bee181301046-kube-api-access-cln9f\") pod \"1a61b8e1-9a04-429b-9439-bee181301046\" (UID: \"1a61b8e1-9a04-429b-9439-bee181301046\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.472524 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7gjv\" (UniqueName: \"kubernetes.io/projected/8e00e8be-96f7-4457-821f-440694bd8692-kube-api-access-p7gjv\") pod \"8e00e8be-96f7-4457-821f-440694bd8692\" (UID: \"8e00e8be-96f7-4457-821f-440694bd8692\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.472553 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e00e8be-96f7-4457-821f-440694bd8692-operator-scripts\") pod \"8e00e8be-96f7-4457-821f-440694bd8692\" (UID: \"8e00e8be-96f7-4457-821f-440694bd8692\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.472572 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c005e52-f6e5-413f-ba23-cb99e461cb66-operator-scripts\") pod \"8c005e52-f6e5-413f-ba23-cb99e461cb66\" (UID: \"8c005e52-f6e5-413f-ba23-cb99e461cb66\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.472587 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvjnr\" (UniqueName: \"kubernetes.io/projected/8c005e52-f6e5-413f-ba23-cb99e461cb66-kube-api-access-rvjnr\") pod \"8c005e52-f6e5-413f-ba23-cb99e461cb66\" (UID: \"8c005e52-f6e5-413f-ba23-cb99e461cb66\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.472896 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8zjl\" (UniqueName: \"kubernetes.io/projected/6a8bdead-378c-4db8-acfe-a0b449c69e8a-kube-api-access-n8zjl\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.477473 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e00e8be-96f7-4457-821f-440694bd8692-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e00e8be-96f7-4457-821f-440694bd8692" (UID: "8e00e8be-96f7-4457-821f-440694bd8692"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.480325 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c005e52-f6e5-413f-ba23-cb99e461cb66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c005e52-f6e5-413f-ba23-cb99e461cb66" (UID: "8c005e52-f6e5-413f-ba23-cb99e461cb66"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.485709 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c005e52-f6e5-413f-ba23-cb99e461cb66-kube-api-access-rvjnr" (OuterVolumeSpecName: "kube-api-access-rvjnr") pod "8c005e52-f6e5-413f-ba23-cb99e461cb66" (UID: "8c005e52-f6e5-413f-ba23-cb99e461cb66"). InnerVolumeSpecName "kube-api-access-rvjnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.486445 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e00e8be-96f7-4457-821f-440694bd8692-kube-api-access-p7gjv" (OuterVolumeSpecName: "kube-api-access-p7gjv") pod "8e00e8be-96f7-4457-821f-440694bd8692" (UID: "8e00e8be-96f7-4457-821f-440694bd8692"). InnerVolumeSpecName "kube-api-access-p7gjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.490005 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.490201 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a61b8e1-9a04-429b-9439-bee181301046-kube-api-access-cln9f" (OuterVolumeSpecName: "kube-api-access-cln9f") pod "1a61b8e1-9a04-429b-9439-bee181301046" (UID: "1a61b8e1-9a04-429b-9439-bee181301046"). InnerVolumeSpecName "kube-api-access-cln9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573321 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a61b8e1-9a04-429b-9439-bee181301046-operator-scripts\") pod \"1a61b8e1-9a04-429b-9439-bee181301046\" (UID: \"1a61b8e1-9a04-429b-9439-bee181301046\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573778 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-config-data\") pod \"3389852b-01f7-4dc9-b7c2-73c858ba1268\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573818 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msjhn\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-kube-api-access-msjhn\") pod \"3389852b-01f7-4dc9-b7c2-73c858ba1268\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573848 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-log-httpd\") pod \"3389852b-01f7-4dc9-b7c2-73c858ba1268\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573874 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-combined-ca-bundle\") pod \"3389852b-01f7-4dc9-b7c2-73c858ba1268\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573910 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-internal-tls-certs\") pod \"3389852b-01f7-4dc9-b7c2-73c858ba1268\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573937 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-run-httpd\") pod \"3389852b-01f7-4dc9-b7c2-73c858ba1268\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573952 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-public-tls-certs\") pod \"3389852b-01f7-4dc9-b7c2-73c858ba1268\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573974 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a61b8e1-9a04-429b-9439-bee181301046-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a61b8e1-9a04-429b-9439-bee181301046" (UID: "1a61b8e1-9a04-429b-9439-bee181301046"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.574213 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cln9f\" (UniqueName: \"kubernetes.io/projected/1a61b8e1-9a04-429b-9439-bee181301046-kube-api-access-cln9f\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.574226 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7gjv\" (UniqueName: \"kubernetes.io/projected/8e00e8be-96f7-4457-821f-440694bd8692-kube-api-access-p7gjv\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.574234 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e00e8be-96f7-4457-821f-440694bd8692-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.574243 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c005e52-f6e5-413f-ba23-cb99e461cb66-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.574252 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvjnr\" (UniqueName: \"kubernetes.io/projected/8c005e52-f6e5-413f-ba23-cb99e461cb66-kube-api-access-rvjnr\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.574260 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a61b8e1-9a04-429b-9439-bee181301046-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.574500 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3389852b-01f7-4dc9-b7c2-73c858ba1268" (UID: "3389852b-01f7-4dc9-b7c2-73c858ba1268"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.578724 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3389852b-01f7-4dc9-b7c2-73c858ba1268" (UID: "3389852b-01f7-4dc9-b7c2-73c858ba1268"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.584672 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-kube-api-access-msjhn" (OuterVolumeSpecName: "kube-api-access-msjhn") pod "3389852b-01f7-4dc9-b7c2-73c858ba1268" (UID: "3389852b-01f7-4dc9-b7c2-73c858ba1268"). InnerVolumeSpecName "kube-api-access-msjhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.636537 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3389852b-01f7-4dc9-b7c2-73c858ba1268" (UID: "3389852b-01f7-4dc9-b7c2-73c858ba1268"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.645245 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerID="29a7ab7f1ceb1b7248d2507a5eb6085cbee233d8230ecf775819b6f6ce78389e" exitCode=0 Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.645409 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f","Type":"ContainerDied","Data":"29a7ab7f1ceb1b7248d2507a5eb6085cbee233d8230ecf775819b6f6ce78389e"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.645521 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-config-data" (OuterVolumeSpecName: "config-data") pod "3389852b-01f7-4dc9-b7c2-73c858ba1268" (UID: "3389852b-01f7-4dc9-b7c2-73c858ba1268"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.647476 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9bb1-account-create-update-dbdlg" event={"ID":"5a189ccd-729c-4453-8adf-7ef08834d320","Type":"ContainerDied","Data":"1120906563f2c993fa9342e6037fa08fad280845ccc8b73576e20887d9536a97"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.647521 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.653299 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3389852b-01f7-4dc9-b7c2-73c858ba1268" (UID: "3389852b-01f7-4dc9-b7c2-73c858ba1268"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.654656 4902 generic.go:334] "Generic (PLEG): container finished" podID="db4d047b-49f4-4b55-a053-081f1be632b7" containerID="4c05b52bed8146e4b813b72bd57efca7be3d0268ea82de7f8102940d78d0f674" exitCode=0 Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.654736 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"db4d047b-49f4-4b55-a053-081f1be632b7","Type":"ContainerDied","Data":"4c05b52bed8146e4b813b72bd57efca7be3d0268ea82de7f8102940d78d0f674"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.662678 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"58b4678d-e59b-49d1-b06e-338a42a0e51e","Type":"ContainerDied","Data":"83d1b2eb20981f2a9a2a1eda26c8252ba222ee4a68dd3f7546c40138c8e10370"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.662759 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.665815 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6f87-account-create-update-rx9dv" event={"ID":"8c005e52-f6e5-413f-ba23-cb99e461cb66","Type":"ContainerDied","Data":"556a8a0278b939a9c8dbd44294ce0e229928d9c06e0c74dd319ffc8da0bf47de"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.665919 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.668949 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.669007 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zbrd5" event={"ID":"8e00e8be-96f7-4457-821f-440694bd8692","Type":"ContainerDied","Data":"3e4a5f1b1b650dc3abf20ac520f3ebf17eba8a0b1800cb370ba3510c73dd619b"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.670899 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044","Type":"ContainerDied","Data":"140924a047cb28624865b0efcf1a901932347a50fbd34bbfa1c4027f44fbc891"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.670941 4902 scope.go:117] "RemoveContainer" containerID="51f3e0557ba29d0e459dc32f45c40c004e66a2616c90bcc78b93663bdae1ff99" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.671087 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.675264 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-etc-swift\") pod \"3389852b-01f7-4dc9-b7c2-73c858ba1268\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.675763 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.675784 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.675797 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.675809 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.675821 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msjhn\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-kube-api-access-msjhn\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.675835 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.676119 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7226-account-create-update-dvfjh" event={"ID":"6a8bdead-378c-4db8-acfe-a0b449c69e8a","Type":"ContainerDied","Data":"f20441eedce16d5292ec3a928996e2d617be39de9f96a7561e77ab7123507595"} Jan 21 14:57:16 crc 
kubenswrapper[4902]: I0121 14:57:16.676179 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.677323 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3389852b-01f7-4dc9-b7c2-73c858ba1268" (UID: "3389852b-01f7-4dc9-b7c2-73c858ba1268"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.679436 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3389852b-01f7-4dc9-b7c2-73c858ba1268" (UID: "3389852b-01f7-4dc9-b7c2-73c858ba1268"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.679470 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7df7-account-create-update-vrg52" event={"ID":"1a61b8e1-9a04-429b-9439-bee181301046","Type":"ContainerDied","Data":"c1a2c01a0ef6d99a3d9e8bb7f28626beef1ad4446ccdbb5bcacd934ce87ecd23"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.679488 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.701410 4902 generic.go:334] "Generic (PLEG): container finished" podID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerID="2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb" exitCode=0 Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.701496 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.702232 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.702372 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54bc9cbc97-hx966" event={"ID":"3389852b-01f7-4dc9-b7c2-73c858ba1268","Type":"ContainerDied","Data":"2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.702422 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54bc9cbc97-hx966" event={"ID":"3389852b-01f7-4dc9-b7c2-73c858ba1268","Type":"ContainerDied","Data":"e83ca63bcfd9328da7616c6b5c09b31fc0bd4751ea531f09a2e1f38c1a7f3d76"} Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.750876 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.752304 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.753427 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.753455 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="ovn-northd" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.766640 4902 scope.go:117] "RemoveContainer" containerID="2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.768375 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9bb1-account-create-update-dbdlg"] Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.778721 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-9bb1-account-create-update-dbdlg"] Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.781020 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.781055 4902 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.785481 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.816360 4902 scope.go:117] 
"RemoveContainer" containerID="03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.832271 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.944728 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7226-account-create-update-dvfjh"] Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.949076 4902 scope.go:117] "RemoveContainer" containerID="2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb" Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.949518 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb\": container with ID starting with 2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb not found: ID does not exist" containerID="2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.949551 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb"} err="failed to get container status \"2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb\": rpc error: code = NotFound desc = could not find container \"2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb\": container with ID starting with 2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb not found: ID does not exist" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.949582 4902 scope.go:117] "RemoveContainer" containerID="03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398" Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.949812 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398\": container with ID starting with 03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398 not found: ID does not exist" containerID="03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.949837 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398"} err="failed to get container status \"03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398\": rpc error: code = NotFound desc = could not find container \"03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398\": container with ID starting with 03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398 not found: ID does not exist" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.952316 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7226-account-create-update-dvfjh"] Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.977669 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": read tcp 10.217.0.2:54538->10.217.0.203:8775: read: connection reset by peer" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.980722 4902 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": read tcp 10.217.0.2:54536->10.217.0.203:8775: read: connection reset by peer" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.980944 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6f87-account-create-update-rx9dv"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:16.999007 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6f87-account-create-update-rx9dv"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.021486 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zbrd5"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.035921 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zbrd5"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.043314 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.050911 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.061469 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-vrg52"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.075661 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-vrg52"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.083751 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-54bc9cbc97-hx966"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.092691 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-54bc9cbc97-hx966"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.101262 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.105248 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.116468 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2c87v"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.285585 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.285899 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="ceilometer-central-agent" containerID="cri-o://49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110" gracePeriod=30 Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.286716 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="proxy-httpd" containerID="cri-o://d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0" gracePeriod=30 Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.286803 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="sg-core" 
containerID="cri-o://91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881" gracePeriod=30 Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.286827 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="ceilometer-notification-agent" containerID="cri-o://c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67" gracePeriod=30 Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.307067 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.359202 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.359421 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b52494a8-ff56-449e-a274-b37eb4bad43d" containerName="kube-state-metrics" containerID="cri-o://af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723" gracePeriod=30 Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.361506 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.387341 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411353 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db4d047b-49f4-4b55-a053-081f1be632b7-etc-machine-id\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411398 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-scripts\") pod \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411456 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-public-tls-certs\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411476 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-scripts\") pod \"b71fc896-318c-4277-bb32-70e3424a26c9\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411501 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-combined-ca-bundle\") pod \"b71fc896-318c-4277-bb32-70e3424a26c9\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411533 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\" (UID: 
\"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411553 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data-custom\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411575 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4d047b-49f4-4b55-a053-081f1be632b7-logs\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411601 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b71fc896-318c-4277-bb32-70e3424a26c9-logs\") pod \"b71fc896-318c-4277-bb32-70e3424a26c9\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411622 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nm9x\" (UniqueName: \"kubernetes.io/projected/db4d047b-49f4-4b55-a053-081f1be632b7-kube-api-access-6nm9x\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411648 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-internal-tls-certs\") pod \"b71fc896-318c-4277-bb32-70e3424a26c9\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411668 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-combined-ca-bundle\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411689 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-config-data\") pod \"b71fc896-318c-4277-bb32-70e3424a26c9\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411710 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-config-data\") pod \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411732 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btb6d\" (UniqueName: \"kubernetes.io/projected/b71fc896-318c-4277-bb32-70e3424a26c9-kube-api-access-btb6d\") pod \"b71fc896-318c-4277-bb32-70e3424a26c9\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412789 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-logs\") pod \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " Jan 
Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412836 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-public-tls-certs\") pod \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") "
Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412857 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-httpd-run\") pod \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") "
Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412884 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-internal-tls-certs\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") "
Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412902 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-combined-ca-bundle\") pod \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") "
Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412928 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtwj7\" (UniqueName: \"kubernetes.io/projected/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-kube-api-access-xtwj7\") pod \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") "
Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412949 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-public-tls-certs\") pod \"b71fc896-318c-4277-bb32-70e3424a26c9\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") "
Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412975 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") "
Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.460618 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db4d047b-49f4-4b55-a053-081f1be632b7-logs" (OuterVolumeSpecName: "logs") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.465414 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-logs" (OuterVolumeSpecName: "logs") pod "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" (UID: "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.471901 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b71fc896-318c-4277-bb32-70e3424a26c9-logs" (OuterVolumeSpecName: "logs") pod "b71fc896-318c-4277-bb32-70e3424a26c9" (UID: "b71fc896-318c-4277-bb32-70e3424a26c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.472190 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" (UID: "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.478185 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db4d047b-49f4-4b55-a053-081f1be632b7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.519652 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4d047b-49f4-4b55-a053-081f1be632b7-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.519676 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b71fc896-318c-4277-bb32-70e3424a26c9-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.519687 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.519694 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.519702 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db4d047b-49f4-4b55-a053-081f1be632b7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.608348 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b2af-account-create-update-g4dvb"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.693102 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.693352 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="2c70bcdb-316e-4246-b333-ddaf6438c6ee" 
containerName="memcached" containerID="cri-o://c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2" gracePeriod=30 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.760379 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b2af-account-create-update-g4dvb"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.784765 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b71fc896-318c-4277-bb32-70e3424a26c9-kube-api-access-btb6d" (OuterVolumeSpecName: "kube-api-access-btb6d") pod "b71fc896-318c-4277-bb32-70e3424a26c9" (UID: "b71fc896-318c-4277-bb32-70e3424a26c9"). InnerVolumeSpecName "kube-api-access-btb6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.792259 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2c87v" event={"ID":"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2","Type":"ContainerStarted","Data":"24e8d59ec3c64b717babfef7f378c16dbc7782ee7d0c22d80830c614a5f49681"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.798846 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" (UID: "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.799303 4902 generic.go:334] "Generic (PLEG): container finished" podID="561efc1e-a930-440f-83b1-a75217a11f32" containerID="709dea640199a3e29bbff0c5bd046ca78f3c55c233e1043ae28cc59e518b7cd2" exitCode=0 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.799389 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df595696d-2ftxp" event={"ID":"561efc1e-a930-440f-83b1-a75217a11f32","Type":"ContainerDied","Data":"709dea640199a3e29bbff0c5bd046ca78f3c55c233e1043ae28cc59e518b7cd2"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.799476 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-kube-api-access-xtwj7" (OuterVolumeSpecName: "kube-api-access-xtwj7") pod "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" (UID: "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f"). InnerVolumeSpecName "kube-api-access-xtwj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.799895 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="67f50f65-9151-4444-9680-f86e0f256069" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.805771 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4d047b-49f4-4b55-a053-081f1be632b7-kube-api-access-6nm9x" (OuterVolumeSpecName: "kube-api-access-6nm9x") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "kube-api-access-6nm9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.806159 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-scripts" (OuterVolumeSpecName: "scripts") pod "b71fc896-318c-4277-bb32-70e3424a26c9" (UID: "b71fc896-318c-4277-bb32-70e3424a26c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.807518 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b2af-account-create-update-852lx"] Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.807973 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" containerName="placement-log" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808000 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" containerName="placement-log" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.808018 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" containerName="placement-api" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808025 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" containerName="placement-api" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.808108 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerName="proxy-server" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808118 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerName="proxy-server" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.808128 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" containerName="cinder-api-log" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808134 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" containerName="cinder-api-log" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.808145 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" containerName="cinder-api" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808150 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" containerName="cinder-api" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.808181 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerName="glance-httpd" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808187 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerName="glance-httpd" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.808196 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerName="proxy-httpd" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808201 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerName="proxy-httpd" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.808215 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerName="glance-log" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808221 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerName="glance-log" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808424 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerName="glance-httpd" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808435 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" containerName="placement-log" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808446 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" containerName="cinder-api-log" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808456 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" containerName="cinder-api" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808466 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerName="proxy-httpd" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808498 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerName="proxy-server" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808510 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerName="glance-log" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808519 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" containerName="placement-api" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.809430 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.817166 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.817655 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b2af-account-create-update-852lx"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.818054 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-scripts" (OuterVolumeSpecName: "scripts") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.819278 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-scripts" (OuterVolumeSpecName: "scripts") pod "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" (UID: "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.823500 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-c6zzp"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841050 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfjwj\" (UniqueName: \"kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj\") pod \"keystone-b2af-account-create-update-852lx\" (UID: \"3bdb84d6-c599-4d87-9c27-cb32ff77d6d9\") " pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841194 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts\") pod \"keystone-b2af-account-create-update-852lx\" (UID: \"3bdb84d6-c599-4d87-9c27-cb32ff77d6d9\") " pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841296 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841307 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtwj7\" (UniqueName: \"kubernetes.io/projected/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-kube-api-access-xtwj7\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841317 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841356 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841375 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841385 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nm9x\" (UniqueName: \"kubernetes.io/projected/db4d047b-49f4-4b55-a053-081f1be632b7-kube-api-access-6nm9x\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841394 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btb6d\" (UniqueName: \"kubernetes.io/projected/b71fc896-318c-4277-bb32-70e3424a26c9-kube-api-access-btb6d\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.852026 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-c6zzp"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.854172 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.863335 4902 generic.go:334] "Generic (PLEG): container finished" podID="b71fc896-318c-4277-bb32-70e3424a26c9" containerID="51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd" exitCode=0 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.863415 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ddf9d8f68-jjk7f" event={"ID":"b71fc896-318c-4277-bb32-70e3424a26c9","Type":"ContainerDied","Data":"51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.863441 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ddf9d8f68-jjk7f" event={"ID":"b71fc896-318c-4277-bb32-70e3424a26c9","Type":"ContainerDied","Data":"31d5a67184f80e0f8e30cfab691135f2f1fd9f01d89fed99d676f711a03521eb"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.863457 4902 scope.go:117] "RemoveContainer" containerID="51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.863584 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.873985 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-8n66z"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.875009 4902 generic.go:334] "Generic (PLEG): container finished" podID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerID="91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881" exitCode=2 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.875074 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerDied","Data":"91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.879100 4902 generic.go:334] "Generic (PLEG): container finished" podID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerID="6bf1eb34ffb8ebb875ad0db959e31364a4f9a1f5a32e44cce848251c4a780377" exitCode=0 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.879167 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa6f350-dd82-4d59-ac24-5460acc2a8a6","Type":"ContainerDied","Data":"6bf1eb34ffb8ebb875ad0db959e31364a4f9a1f5a32e44cce848251c4a780377"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.881361 4902 generic.go:334] "Generic (PLEG): container finished" podID="c4168bc0-26cf-4786-9e28-95647462c372" containerID="635d235f3800b93dc934010299b8ed6cf8c1efd38064d7aecd2aa2faa2ae46a0" exitCode=0 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.881409 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4168bc0-26cf-4786-9e28-95647462c372","Type":"ContainerDied","Data":"635d235f3800b93dc934010299b8ed6cf8c1efd38064d7aecd2aa2faa2ae46a0"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.884160 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.884172 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"db4d047b-49f4-4b55-a053-081f1be632b7","Type":"ContainerDied","Data":"cf192cd4c08d4018b743f3dc19c0686fe97811bb3b64651346fb935eec9339db"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.890757 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f","Type":"ContainerDied","Data":"fab7af2822b1e0c413efff882a4ddbb2ff2b86596095fd2bcec07bee48c5bf19"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.890867 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.895566 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-8n66z"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.914230 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5684459db4-jgdkj"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.914490 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-5684459db4-jgdkj" podUID="8e00c7d5-7199-4602-9d3b-5af4f14124bc" containerName="keystone-api" containerID="cri-o://ea8dbb434ad9bd3e85adcd00febd132baf741c5aae1afe358fb761a39bcb889e" gracePeriod=30 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.923541 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.930453 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" (UID: "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.937246 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b2af-account-create-update-852lx"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.940765 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.949398 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfjwj\" (UniqueName: \"kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj\") pod \"keystone-b2af-account-create-update-852lx\" (UID: \"3bdb84d6-c599-4d87-9c27-cb32ff77d6d9\") " pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.949674 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts\") pod \"keystone-b2af-account-create-update-852lx\" (UID: \"3bdb84d6-c599-4d87-9c27-cb32ff77d6d9\") " pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.950079 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.950095 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.950105 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.956422 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-bdp9p"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.956577 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-bdp9p"] Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.950276 4902 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.962199 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts podName:3bdb84d6-c599-4d87-9c27-cb32ff77d6d9 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:18.462163568 +0000 UTC m=+1400.538996597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts") pod "keystone-b2af-account-create-update-852lx" (UID: "3bdb84d6-c599-4d87-9c27-cb32ff77d6d9") : configmap "openstack-scripts" not found Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.962239 4902 projected.go:194] Error preparing data for projected volume kube-api-access-bfjwj for pod openstack/keystone-b2af-account-create-update-852lx: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.962488 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj podName:3bdb84d6-c599-4d87-9c27-cb32ff77d6d9 nodeName:}" failed. 
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.984316 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2c87v"]
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.069560 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.121134 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" (UID: "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.122477 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b71fc896-318c-4277-bb32-70e3424a26c9" (UID: "b71fc896-318c-4277-bb32-70e3424a26c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.133688 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data" (OuterVolumeSpecName: "config-data") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.159518 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.159548 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.159561 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.159574 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.169844 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-config-data" (OuterVolumeSpecName: "config-data") pod "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" (UID: "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.220262 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.242235 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.265778 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.265855 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.265887 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.273596 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b71fc896-318c-4277-bb32-70e3424a26c9" (UID: "b71fc896-318c-4277-bb32-70e3424a26c9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.302530 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b71fc896-318c-4277-bb32-70e3424a26c9" (UID: "b71fc896-318c-4277-bb32-70e3424a26c9"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.321077 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a61b8e1-9a04-429b-9439-bee181301046" path="/var/lib/kubelet/pods/1a61b8e1-9a04-429b-9439-bee181301046/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.321583 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" path="/var/lib/kubelet/pods/3389852b-01f7-4dc9-b7c2-73c858ba1268/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.322559 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" path="/var/lib/kubelet/pods/58b4678d-e59b-49d1-b06e-338a42a0e51e/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.323466 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a189ccd-729c-4453-8adf-7ef08834d320" path="/var/lib/kubelet/pods/5a189ccd-729c-4453-8adf-7ef08834d320/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.323869 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a8bdead-378c-4db8-acfe-a0b449c69e8a" path="/var/lib/kubelet/pods/6a8bdead-378c-4db8-acfe-a0b449c69e8a/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.325261 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c005e52-f6e5-413f-ba23-cb99e461cb66" path="/var/lib/kubelet/pods/8c005e52-f6e5-413f-ba23-cb99e461cb66/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.325695 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e00e8be-96f7-4457-821f-440694bd8692" path="/var/lib/kubelet/pods/8e00e8be-96f7-4457-821f-440694bd8692/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.379445 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.379470 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.385927 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f05425e-47d3-4358-844c-9b661f254e22" path="/var/lib/kubelet/pods/8f05425e-47d3-4358-844c-9b661f254e22/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.387099 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94031dcf-9569-4cf1-90a9-61c962434ae8" path="/var/lib/kubelet/pods/94031dcf-9569-4cf1-90a9-61c962434ae8/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.389180 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="966f492d-0f8f-4bef-b60f-777f25367104" path="/var/lib/kubelet/pods/966f492d-0f8f-4bef-b60f-777f25367104/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.390383 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bb9c4d9-a042-4a60-adca-03be4d8ec42d" path="/var/lib/kubelet/pods/9bb9c4d9-a042-4a60-adca-03be4d8ec42d/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.390676 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-config-data" (OuterVolumeSpecName: "config-data") pod "b71fc896-318c-4277-bb32-70e3424a26c9" (UID: "b71fc896-318c-4277-bb32-70e3424a26c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.391381 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" path="/var/lib/kubelet/pods/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.392017 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd5b13a8-7950-40cf-9255-d2c9f34c6add" path="/var/lib/kubelet/pods/fd5b13a8-7950-40cf-9255-d2c9f34c6add/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.485134 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfjwj\" (UniqueName: \"kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj\") pod \"keystone-b2af-account-create-update-852lx\" (UID: \"3bdb84d6-c599-4d87-9c27-cb32ff77d6d9\") " pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.485230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts\") pod \"keystone-b2af-account-create-update-852lx\" (UID: \"3bdb84d6-c599-4d87-9c27-cb32ff77d6d9\") " pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.485290 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:18.485347 4902 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:18.485390 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts podName:3bdb84d6-c599-4d87-9c27-cb32ff77d6d9 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:19.485375195 +0000 UTC m=+1401.562208224 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts") pod "keystone-b2af-account-create-update-852lx" (UID: "3bdb84d6-c599-4d87-9c27-cb32ff77d6d9") : configmap "openstack-scripts" not found Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:18.488591 4902 projected.go:194] Error preparing data for projected volume kube-api-access-bfjwj for pod openstack/keystone-b2af-account-create-update-852lx: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:18.488673 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj podName:3bdb84d6-c599-4d87-9c27-cb32ff77d6d9 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:19.488634603 +0000 UTC m=+1401.565467632 (durationBeforeRetry 1s). 
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.511316 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerName="galera" containerID="cri-o://21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9" gracePeriod=30
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.793664 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused"
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.829294 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:18.834459 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-bfjwj operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-b2af-account-create-update-852lx" podUID="3bdb84d6-c599-4d87-9c27-cb32ff77d6d9"
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.835585 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.848612 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5df595696d-2ftxp"
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.861061 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.861545 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.865698 4902 scope.go:117] "RemoveContainer" containerID="bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924"
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.884162 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68564cb5c-bh98h"
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.886995 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.888535 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7"
Need to start a new one" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897506 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cgzm\" (UniqueName: \"kubernetes.io/projected/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-kube-api-access-9cgzm\") pod \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897555 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-public-tls-certs\") pod \"561efc1e-a930-440f-83b1-a75217a11f32\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897582 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-combined-ca-bundle\") pod \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897631 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-config\") pod \"b52494a8-ff56-449e-a274-b37eb4bad43d\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897657 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c653ffa0-195e-4eda-8c25-cfcff2715bdf-logs\") pod \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897677 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-combined-ca-bundle\") pod \"c4168bc0-26cf-4786-9e28-95647462c372\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897692 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-httpd-run\") pod \"c4168bc0-26cf-4786-9e28-95647462c372\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897712 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-scripts\") pod \"c4168bc0-26cf-4786-9e28-95647462c372\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897751 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn54b\" (UniqueName: \"kubernetes.io/projected/c4168bc0-26cf-4786-9e28-95647462c372-kube-api-access-kn54b\") pod \"c4168bc0-26cf-4786-9e28-95647462c372\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897787 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvkf7\" (UniqueName: 
\"kubernetes.io/projected/561efc1e-a930-440f-83b1-a75217a11f32-kube-api-access-jvkf7\") pod \"561efc1e-a930-440f-83b1-a75217a11f32\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897820 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-combined-ca-bundle\") pod \"561efc1e-a930-440f-83b1-a75217a11f32\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897845 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data-custom\") pod \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897864 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-combined-ca-bundle\") pod \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897890 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-nova-metadata-tls-certs\") pod \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897914 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-certs\") pod \"b52494a8-ff56-449e-a274-b37eb4bad43d\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897933 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data\") pod \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897947 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data\") pod \"561efc1e-a930-440f-83b1-a75217a11f32\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897972 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/561efc1e-a930-440f-83b1-a75217a11f32-logs\") pod \"561efc1e-a930-440f-83b1-a75217a11f32\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897992 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4sgn\" (UniqueName: \"kubernetes.io/projected/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-api-access-t4sgn\") pod \"b52494a8-ff56-449e-a274-b37eb4bad43d\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.898008 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-combined-ca-bundle\") pod \"b52494a8-ff56-449e-a274-b37eb4bad43d\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.898025 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data-custom\") pod \"561efc1e-a930-440f-83b1-a75217a11f32\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.903665 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-config-data\") pod \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.903769 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-internal-tls-certs\") pod \"c4168bc0-26cf-4786-9e28-95647462c372\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.903797 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d74jj\" (UniqueName: \"kubernetes.io/projected/c653ffa0-195e-4eda-8c25-cfcff2715bdf-kube-api-access-d74jj\") pod \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.903834 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-logs\") pod \"c4168bc0-26cf-4786-9e28-95647462c372\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.903863 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-config-data\") pod \"c4168bc0-26cf-4786-9e28-95647462c372\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.903915 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-logs\") pod \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.903945 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-internal-tls-certs\") pod \"561efc1e-a930-440f-83b1-a75217a11f32\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.903972 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"c4168bc0-26cf-4786-9e28-95647462c372\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.906091 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-httpd-run" 
(OuterVolumeSpecName: "httpd-run") pod "c4168bc0-26cf-4786-9e28-95647462c372" (UID: "c4168bc0-26cf-4786-9e28-95647462c372"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.906796 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.909006 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-logs" (OuterVolumeSpecName: "logs") pod "c4168bc0-26cf-4786-9e28-95647462c372" (UID: "c4168bc0-26cf-4786-9e28-95647462c372"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.916613 4902 scope.go:117] "RemoveContainer" containerID="51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.917371 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/561efc1e-a930-440f-83b1-a75217a11f32-logs" (OuterVolumeSpecName: "logs") pod "561efc1e-a930-440f-83b1-a75217a11f32" (UID: "561efc1e-a930-440f-83b1-a75217a11f32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.922933 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-scripts" (OuterVolumeSpecName: "scripts") pod "c4168bc0-26cf-4786-9e28-95647462c372" (UID: "c4168bc0-26cf-4786-9e28-95647462c372"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.923311 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "561efc1e-a930-440f-83b1-a75217a11f32" (UID: "561efc1e-a930-440f-83b1-a75217a11f32"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.924576 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-logs" (OuterVolumeSpecName: "logs") pod "3aa6f350-dd82-4d59-ac24-5460acc2a8a6" (UID: "3aa6f350-dd82-4d59-ac24-5460acc2a8a6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.926263 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-api-access-t4sgn" (OuterVolumeSpecName: "kube-api-access-t4sgn") pod "b52494a8-ff56-449e-a274-b37eb4bad43d" (UID: "b52494a8-ff56-449e-a274-b37eb4bad43d"). InnerVolumeSpecName "kube-api-access-t4sgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.926533 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.936805 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.937395 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/561efc1e-a930-440f-83b1-a75217a11f32-kube-api-access-jvkf7" (OuterVolumeSpecName: "kube-api-access-jvkf7") pod "561efc1e-a930-440f-83b1-a75217a11f32" (UID: "561efc1e-a930-440f-83b1-a75217a11f32"). InnerVolumeSpecName "kube-api-access-jvkf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:18.937761 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd\": container with ID starting with 51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd not found: ID does not exist" containerID="51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.937866 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd"} err="failed to get container status \"51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd\": rpc error: code = NotFound desc = could not find container \"51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd\": container with ID starting with 51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd not found: ID does not exist" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.937967 4902 scope.go:117] "RemoveContainer" containerID="bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.943676 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c653ffa0-195e-4eda-8c25-cfcff2715bdf-logs" (OuterVolumeSpecName: "logs") pod "c653ffa0-195e-4eda-8c25-cfcff2715bdf" (UID: "c653ffa0-195e-4eda-8c25-cfcff2715bdf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:18.945382 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924\": container with ID starting with bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924 not found: ID does not exist" containerID="bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.945417 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924"} err="failed to get container status \"bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924\": rpc error: code = NotFound desc = could not find container \"bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924\": container with ID starting with bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924 not found: ID does not exist" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.945445 4902 scope.go:117] "RemoveContainer" containerID="4c05b52bed8146e4b813b72bd57efca7be3d0268ea82de7f8102940d78d0f674" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.953225 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c653ffa0-195e-4eda-8c25-cfcff2715bdf" (UID: "c653ffa0-195e-4eda-8c25-cfcff2715bdf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.964350 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7ddf9d8f68-jjk7f"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.967509 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c653ffa0-195e-4eda-8c25-cfcff2715bdf-kube-api-access-d74jj" (OuterVolumeSpecName: "kube-api-access-d74jj") pod "c653ffa0-195e-4eda-8c25-cfcff2715bdf" (UID: "c653ffa0-195e-4eda-8c25-cfcff2715bdf"). InnerVolumeSpecName "kube-api-access-d74jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.972318 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7ddf9d8f68-jjk7f"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.974287 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4168bc0-26cf-4786-9e28-95647462c372-kube-api-access-kn54b" (OuterVolumeSpecName: "kube-api-access-kn54b") pod "c4168bc0-26cf-4786-9e28-95647462c372" (UID: "c4168bc0-26cf-4786-9e28-95647462c372"). InnerVolumeSpecName "kube-api-access-kn54b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.974972 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2ff2c3d8-2d68-4255-a175-21f0df1b9276/ovn-northd/0.log" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.975017 4902 generic.go:334] "Generic (PLEG): container finished" podID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerID="c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" exitCode=139 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.975188 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2ff2c3d8-2d68-4255-a175-21f0df1b9276","Type":"ContainerDied","Data":"c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.980478 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-kube-api-access-9cgzm" (OuterVolumeSpecName: "kube-api-access-9cgzm") pod "3aa6f350-dd82-4d59-ac24-5460acc2a8a6" (UID: "3aa6f350-dd82-4d59-ac24-5460acc2a8a6"). InnerVolumeSpecName "kube-api-access-9cgzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.986186 4902 generic.go:334] "Generic (PLEG): container finished" podID="2acfa57e-c4e9-4809-b5cb-109f1bbb64f2" containerID="507ce1ddbb5ab555237835c4f50235305caf3b3ba28a9df9ec0892a88f2b0f8f" exitCode=1 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.986267 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2c87v" event={"ID":"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2","Type":"ContainerDied","Data":"507ce1ddbb5ab555237835c4f50235305caf3b3ba28a9df9ec0892a88f2b0f8f"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.995004 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "c4168bc0-26cf-4786-9e28-95647462c372" (UID: "c4168bc0-26cf-4786-9e28-95647462c372"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.000126 4902 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924" err="rpc error: code = NotFound desc = could not find container \"bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924\": container with ID starting with bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924 not found: ID does not exist" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.000168 4902 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/placement-7ddf9d8f68-jjk7f_openstack_placement-log-bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924.log: no such file or directory" path="/var/log/containers/placement-7ddf9d8f68-jjk7f_openstack_placement-log-bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924.log" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.003192 4902 generic.go:334] "Generic (PLEG): container finished" podID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerID="c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8" exitCode=0 Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.003241 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.003281 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" event={"ID":"365d6c18-395e-4a62-939d-a04927ffa8aa","Type":"ContainerDied","Data":"c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.003311 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" event={"ID":"365d6c18-395e-4a62-939d-a04927ffa8aa","Type":"ContainerDied","Data":"d7ec9f34e635f9308b93f9d0dc6cda96b623b10532da8d7eb05383f6117459ce"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.004946 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b52494a8-ff56-449e-a274-b37eb4bad43d" (UID: "b52494a8-ff56-449e-a274-b37eb4bad43d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.005385 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-config-data\") pod \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.005480 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96b9v\" (UniqueName: \"kubernetes.io/projected/365d6c18-395e-4a62-939d-a04927ffa8aa-kube-api-access-96b9v\") pod \"365d6c18-395e-4a62-939d-a04927ffa8aa\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.005518 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-combined-ca-bundle\") pod \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.005644 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365d6c18-395e-4a62-939d-a04927ffa8aa-logs\") pod \"365d6c18-395e-4a62-939d-a04927ffa8aa\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.005701 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data\") pod \"365d6c18-395e-4a62-939d-a04927ffa8aa\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.005797 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-combined-ca-bundle\") pod \"365d6c18-395e-4a62-939d-a04927ffa8aa\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.005867 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvj2m\" (UniqueName: \"kubernetes.io/projected/c366100e-d2a0-4be9-965f-ef7b7ad39f78-kube-api-access-lvj2m\") pod \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.005964 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data-custom\") pod \"365d6c18-395e-4a62-939d-a04927ffa8aa\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.008911 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "365d6c18-395e-4a62-939d-a04927ffa8aa" (UID: "365d6c18-395e-4a62-939d-a04927ffa8aa"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018317 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/365d6c18-395e-4a62-939d-a04927ffa8aa-logs" (OuterVolumeSpecName: "logs") pod "365d6c18-395e-4a62-939d-a04927ffa8aa" (UID: "365d6c18-395e-4a62-939d-a04927ffa8aa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018607 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018642 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/561efc1e-a930-440f-83b1-a75217a11f32-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018660 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4sgn\" (UniqueName: \"kubernetes.io/projected/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-api-access-t4sgn\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018675 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018687 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018698 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d74jj\" (UniqueName: \"kubernetes.io/projected/c653ffa0-195e-4eda-8c25-cfcff2715bdf-kube-api-access-d74jj\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018706 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018715 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018724 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018746 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018756 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cgzm\" (UniqueName: \"kubernetes.io/projected/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-kube-api-access-9cgzm\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018764 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c653ffa0-195e-4eda-8c25-cfcff2715bdf-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018773 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018782 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018790 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365d6c18-395e-4a62-939d-a04927ffa8aa-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018798 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn54b\" (UniqueName: \"kubernetes.io/projected/c4168bc0-26cf-4786-9e28-95647462c372-kube-api-access-kn54b\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018807 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvkf7\" (UniqueName: \"kubernetes.io/projected/561efc1e-a930-440f-83b1-a75217a11f32-kube-api-access-jvkf7\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.022675 4902 generic.go:334] "Generic (PLEG): container finished" podID="b52494a8-ff56-449e-a274-b37eb4bad43d" containerID="af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723" exitCode=2 Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.022750 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b52494a8-ff56-449e-a274-b37eb4bad43d","Type":"ContainerDied","Data":"af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.022776 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b52494a8-ff56-449e-a274-b37eb4bad43d","Type":"ContainerDied","Data":"b11f8ee0923ff98e0291569b03ef8eeccd15dca9bc3a6e79246d5a184580c3ae"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.022826 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.023471 4902 scope.go:117] "RemoveContainer" containerID="d81b469d4bfe4317399c28b768091ee1e4d32b1ffeb38b5ab40fde67bdde4b7f" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.035904 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/365d6c18-395e-4a62-939d-a04927ffa8aa-kube-api-access-96b9v" (OuterVolumeSpecName: "kube-api-access-96b9v") pod "365d6c18-395e-4a62-939d-a04927ffa8aa" (UID: "365d6c18-395e-4a62-939d-a04927ffa8aa"). InnerVolumeSpecName "kube-api-access-96b9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.047613 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c366100e-d2a0-4be9-965f-ef7b7ad39f78-kube-api-access-lvj2m" (OuterVolumeSpecName: "kube-api-access-lvj2m") pod "c366100e-d2a0-4be9-965f-ef7b7ad39f78" (UID: "c366100e-d2a0-4be9-965f-ef7b7ad39f78"). InnerVolumeSpecName "kube-api-access-lvj2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.050875 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df595696d-2ftxp" event={"ID":"561efc1e-a930-440f-83b1-a75217a11f32","Type":"ContainerDied","Data":"596188194bb88a2f6c89003cb099ac4ba874000a54cf1ceffb7115b26f061225"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.050974 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.052372 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-config-data" (OuterVolumeSpecName: "config-data") pod "3aa6f350-dd82-4d59-ac24-5460acc2a8a6" (UID: "3aa6f350-dd82-4d59-ac24-5460acc2a8a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.053924 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c653ffa0-195e-4eda-8c25-cfcff2715bdf" (UID: "c653ffa0-195e-4eda-8c25-cfcff2715bdf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.060835 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4168bc0-26cf-4786-9e28-95647462c372","Type":"ContainerDied","Data":"7f3690d2641b9d3eb31fce9c2db367653c8289ff406af6ce68593f803e401401"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.060948 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.072365 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa6f350-dd82-4d59-ac24-5460acc2a8a6","Type":"ContainerDied","Data":"34e826e9786b7ad724ed0dc96336ea0075c6129a9fc9742797a8ae0fd3c41773"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.072562 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.091693 4902 scope.go:117] "RemoveContainer" containerID="29a7ab7f1ceb1b7248d2507a5eb6085cbee233d8230ecf775819b6f6ce78389e" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.100343 4902 generic.go:334] "Generic (PLEG): container finished" podID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerID="d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0" exitCode=0 Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.100376 4902 generic.go:334] "Generic (PLEG): container finished" podID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerID="49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110" exitCode=0 Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.100393 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerDied","Data":"d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.100437 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerDied","Data":"49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.102499 4902 generic.go:334] "Generic (PLEG): container finished" podID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerID="5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0" exitCode=0 Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.102558 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68564cb5c-bh98h" event={"ID":"c653ffa0-195e-4eda-8c25-cfcff2715bdf","Type":"ContainerDied","Data":"5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.102584 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68564cb5c-bh98h" event={"ID":"c653ffa0-195e-4eda-8c25-cfcff2715bdf","Type":"ContainerDied","Data":"0adea585b27eb9363f63f38b86e1f0b5aee1a5b47c7b1b2342897a2515892311"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.102672 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.106795 4902 generic.go:334] "Generic (PLEG): container finished" podID="c366100e-d2a0-4be9-965f-ef7b7ad39f78" containerID="421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812" exitCode=0 Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.106870 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.107416 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.107619 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c366100e-d2a0-4be9-965f-ef7b7ad39f78","Type":"ContainerDied","Data":"421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.107656 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c366100e-d2a0-4be9-965f-ef7b7ad39f78","Type":"ContainerDied","Data":"f71b431a165886dfcb60b7772fbf29ab480085d500ccd4f828f82ea85ca3c58b"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.116251 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "b52494a8-ff56-449e-a274-b37eb4bad43d" (UID: "b52494a8-ff56-449e-a274-b37eb4bad43d"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.122251 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvj2m\" (UniqueName: \"kubernetes.io/projected/c366100e-d2a0-4be9-965f-ef7b7ad39f78-kube-api-access-lvj2m\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.122288 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.122303 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96b9v\" (UniqueName: \"kubernetes.io/projected/365d6c18-395e-4a62-939d-a04927ffa8aa-kube-api-access-96b9v\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.122316 4902 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.122330 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.122384 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.122428 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data podName:8d7103bd-b24b-4a0c-b68a-17373307f1aa nodeName:}" failed. No retries permitted until 2026-01-21 14:57:27.122415928 +0000 UTC m=+1409.199248957 (durationBeforeRetry 8s). 
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.142256 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "b52494a8-ff56-449e-a274-b37eb4bad43d" (UID: "b52494a8-ff56-449e-a274-b37eb4bad43d"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.152823 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "561efc1e-a930-440f-83b1-a75217a11f32" (UID: "561efc1e-a930-440f-83b1-a75217a11f32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.157848 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3aa6f350-dd82-4d59-ac24-5460acc2a8a6" (UID: "3aa6f350-dd82-4d59-ac24-5460acc2a8a6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.157964 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.161251 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data" (OuterVolumeSpecName: "config-data") pod "561efc1e-a930-440f-83b1-a75217a11f32" (UID: "561efc1e-a930-440f-83b1-a75217a11f32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.174127 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3aa6f350-dd82-4d59-ac24-5460acc2a8a6" (UID: "3aa6f350-dd82-4d59-ac24-5460acc2a8a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.190252 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data" (OuterVolumeSpecName: "config-data") pod "c653ffa0-195e-4eda-8c25-cfcff2715bdf" (UID: "c653ffa0-195e-4eda-8c25-cfcff2715bdf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.194231 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-config-data" (OuterVolumeSpecName: "config-data") pod "c366100e-d2a0-4be9-965f-ef7b7ad39f78" (UID: "c366100e-d2a0-4be9-965f-ef7b7ad39f78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.211650 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "561efc1e-a930-440f-83b1-a75217a11f32" (UID: "561efc1e-a930-440f-83b1-a75217a11f32"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.229888 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.230225 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.230294 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.230347 4902 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.230401 4902 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.230473 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.230524 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.230720 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.230777 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.232202 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c4168bc0-26cf-4786-9e28-95647462c372" (UID: "c4168bc0-26cf-4786-9e28-95647462c372"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.233805 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-config-data" (OuterVolumeSpecName: "config-data") pod "c4168bc0-26cf-4786-9e28-95647462c372" (UID: "c4168bc0-26cf-4786-9e28-95647462c372"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.235188 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "365d6c18-395e-4a62-939d-a04927ffa8aa" (UID: "365d6c18-395e-4a62-939d-a04927ffa8aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.242395 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c366100e-d2a0-4be9-965f-ef7b7ad39f78" (UID: "c366100e-d2a0-4be9-965f-ef7b7ad39f78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.242640 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "561efc1e-a930-440f-83b1-a75217a11f32" (UID: "561efc1e-a930-440f-83b1-a75217a11f32"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.244849 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4168bc0-26cf-4786-9e28-95647462c372" (UID: "c4168bc0-26cf-4786-9e28-95647462c372"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.258949 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data" (OuterVolumeSpecName: "config-data") pod "365d6c18-395e-4a62-939d-a04927ffa8aa" (UID: "365d6c18-395e-4a62-939d-a04927ffa8aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.280003 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.282027 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.283331 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.283434 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="dbc235c8-beef-433d-b663-e1d09b6a9b65" containerName="nova-cell1-conductor-conductor"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.334577 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.334617 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.334629 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.334638 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.334650 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.334661 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.334670 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.472599 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b2af-account-create-update-852lx"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.481237 4902 scope.go:117] "RemoveContainer" containerID="11db3a976cf5ea9322be5da7913baf9b9709079192d4b3c588596ad2459819bd"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.497211 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2c87v"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.514201 4902 scope.go:117] "RemoveContainer" containerID="c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.537778 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttmff\" (UniqueName: \"kubernetes.io/projected/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-kube-api-access-ttmff\") pod \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\" (UID: \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\") "
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.538012 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-operator-scripts\") pod \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\" (UID: \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\") "
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.538188 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts\") pod \"keystone-b2af-account-create-update-852lx\" (UID: \"3bdb84d6-c599-4d87-9c27-cb32ff77d6d9\") " pod="openstack/keystone-b2af-account-create-update-852lx"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.538300 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfjwj\" (UniqueName: \"kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj\") pod \"keystone-b2af-account-create-update-852lx\" (UID: \"3bdb84d6-c599-4d87-9c27-cb32ff77d6d9\") " pod="openstack/keystone-b2af-account-create-update-852lx"
Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.538907 4902 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.538954 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts podName:3bdb84d6-c599-4d87-9c27-cb32ff77d6d9 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:21.53894122 +0000 UTC m=+1403.615774239 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts") pod "keystone-b2af-account-create-update-852lx" (UID: "3bdb84d6-c599-4d87-9c27-cb32ff77d6d9") : configmap "openstack-scripts" not found
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.538996 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2acfa57e-c4e9-4809-b5cb-109f1bbb64f2" (UID: "2acfa57e-c4e9-4809-b5cb-109f1bbb64f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.543985 4902 scope.go:117] "RemoveContainer" containerID="f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48"
Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.544376 4902 projected.go:194] Error preparing data for projected volume kube-api-access-bfjwj for pod openstack/keystone-b2af-account-create-update-852lx: failed to fetch token: serviceaccounts "galera-openstack" not found
Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.544440 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj podName:3bdb84d6-c599-4d87-9c27-cb32ff77d6d9 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:21.544422068 +0000 UTC m=+1403.621255097 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bfjwj" (UniqueName: "kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj") pod "keystone-b2af-account-create-update-852lx" (UID: "3bdb84d6-c599-4d87-9c27-cb32ff77d6d9") : failed to fetch token: serviceaccounts "galera-openstack" not found
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.548888 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-kube-api-access-ttmff" (OuterVolumeSpecName: "kube-api-access-ttmff") pod "2acfa57e-c4e9-4809-b5cb-109f1bbb64f2" (UID: "2acfa57e-c4e9-4809-b5cb-109f1bbb64f2"). InnerVolumeSpecName "kube-api-access-ttmff". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.550898 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.571408 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2ff2c3d8-2d68-4255-a175-21f0df1b9276/ovn-northd/0.log"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.571505 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.573648 4902 scope.go:117] "RemoveContainer" containerID="c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8"
Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.573959 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8\": container with ID starting with c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8 not found: ID does not exist" containerID="c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.573986 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8"} err="failed to get container status \"c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8\": rpc error: code = NotFound desc = could not find container \"c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8\": container with ID starting with c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8 not found: ID does not exist"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.574006 4902 scope.go:117] "RemoveContainer" containerID="f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48"
Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.577231 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48\": container with ID starting with f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48 not found: ID does not exist" containerID="f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.577242 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.577266 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.577263 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48"} err="failed to get container status \"f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48\": rpc error: code = NotFound desc = could not find container \"f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48\": container with ID starting with f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48 not found: ID does not exist"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.577286 4902 scope.go:117] "RemoveContainer" containerID="af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.588308 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-b755cd77b-nd6p7"]
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.595220 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-b755cd77b-nd6p7"]
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.598146 4902 scope.go:117] "RemoveContainer" containerID="af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723"
Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.601222 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723\": container with ID starting with af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723 not found: ID does not exist" containerID="af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.601270 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723"} err="failed to get container status \"af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723\": rpc error: code = NotFound desc = could not find container \"af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723\": container with ID starting with af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723 not found: ID does not exist"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.601297 4902 scope.go:117] "RemoveContainer" containerID="709dea640199a3e29bbff0c5bd046ca78f3c55c233e1043ae28cc59e518b7cd2"
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.601412 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.606457 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.638783 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5df595696d-2ftxp"]
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639168 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-rundir\") pod \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") "
Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639231 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-northd-tls-certs\") pod \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") "
volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-northd-tls-certs\") pod \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639260 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-config\") pod \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639285 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-metrics-certs-tls-certs\") pod \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639319 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kolla-config\") pod \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639356 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-scripts\") pod \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639420 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xllj\" (UniqueName: \"kubernetes.io/projected/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kube-api-access-5xllj\") pod \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639491 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-memcached-tls-certs\") pod \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639617 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-combined-ca-bundle\") pod \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639682 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-combined-ca-bundle\") pod \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639771 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j24b6\" (UniqueName: \"kubernetes.io/projected/2ff2c3d8-2d68-4255-a175-21f0df1b9276-kube-api-access-j24b6\") pod \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639815 4902 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-config-data\") pod \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639507 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "2ff2c3d8-2d68-4255-a175-21f0df1b9276" (UID: "2ff2c3d8-2d68-4255-a175-21f0df1b9276"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.640124 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-config" (OuterVolumeSpecName: "config") pod "2ff2c3d8-2d68-4255-a175-21f0df1b9276" (UID: "2ff2c3d8-2d68-4255-a175-21f0df1b9276"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.640686 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-scripts" (OuterVolumeSpecName: "scripts") pod "2ff2c3d8-2d68-4255-a175-21f0df1b9276" (UID: "2ff2c3d8-2d68-4255-a175-21f0df1b9276"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.640743 4902 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.640772 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.640793 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttmff\" (UniqueName: \"kubernetes.io/projected/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-kube-api-access-ttmff\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.641174 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "2c70bcdb-316e-4246-b333-ddaf6438c6ee" (UID: "2c70bcdb-316e-4246-b333-ddaf6438c6ee"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.644099 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5df595696d-2ftxp"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.644775 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-config-data" (OuterVolumeSpecName: "config-data") pod "2c70bcdb-316e-4246-b333-ddaf6438c6ee" (UID: "2c70bcdb-316e-4246-b333-ddaf6438c6ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.648334 4902 scope.go:117] "RemoveContainer" containerID="b91bda9e24415f053bbf7e3136ae0eb36d0535911dff5c3a69ee2c9fd40feb34" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.648363 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ff2c3d8-2d68-4255-a175-21f0df1b9276-kube-api-access-j24b6" (OuterVolumeSpecName: "kube-api-access-j24b6") pod "2ff2c3d8-2d68-4255-a175-21f0df1b9276" (UID: "2ff2c3d8-2d68-4255-a175-21f0df1b9276"). InnerVolumeSpecName "kube-api-access-j24b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.648673 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.650535 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kube-api-access-5xllj" (OuterVolumeSpecName: "kube-api-access-5xllj") pod "2c70bcdb-316e-4246-b333-ddaf6438c6ee" (UID: "2c70bcdb-316e-4246-b333-ddaf6438c6ee"). InnerVolumeSpecName "kube-api-access-5xllj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.653916 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.659358 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.666229 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.668959 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-68564cb5c-bh98h"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.674801 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-68564cb5c-bh98h"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.687897 4902 scope.go:117] "RemoveContainer" containerID="635d235f3800b93dc934010299b8ed6cf8c1efd38064d7aecd2aa2faa2ae46a0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.699494 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c70bcdb-316e-4246-b333-ddaf6438c6ee" (UID: "2c70bcdb-316e-4246-b333-ddaf6438c6ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.701493 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ff2c3d8-2d68-4255-a175-21f0df1b9276" (UID: "2ff2c3d8-2d68-4255-a175-21f0df1b9276"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.702789 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.712194 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "2ff2c3d8-2d68-4255-a175-21f0df1b9276" (UID: "2ff2c3d8-2d68-4255-a175-21f0df1b9276"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.712353 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.716354 4902 scope.go:117] "RemoveContainer" containerID="baf5060a9be38be6557c2e269eeef0d7067b99a8ffc55de9fabcd6c3d7fd4375" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.728722 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.729004 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="359a818e-1c34-4dfd-bb59-0e72280a85a0" containerName="nova-cell0-conductor-conductor" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.735382 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "2ff2c3d8-2d68-4255-a175-21f0df1b9276" (UID: "2ff2c3d8-2d68-4255-a175-21f0df1b9276"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742242 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j24b6\" (UniqueName: \"kubernetes.io/projected/2ff2c3d8-2d68-4255-a175-21f0df1b9276-kube-api-access-j24b6\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742269 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742278 4902 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742289 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742298 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742305 4902 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742314 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742322 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xllj\" (UniqueName: \"kubernetes.io/projected/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kube-api-access-5xllj\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742331 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742339 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.743346 4902 scope.go:117] "RemoveContainer" containerID="6bf1eb34ffb8ebb875ad0db959e31364a4f9a1f5a32e44cce848251c4a780377" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.744389 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "2c70bcdb-316e-4246-b333-ddaf6438c6ee" (UID: "2c70bcdb-316e-4246-b333-ddaf6438c6ee"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.766242 4902 scope.go:117] "RemoveContainer" containerID="090f15138593116ea5509f9b1db81b64387863cddd781c3e2ec064762515d25e" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.784538 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.789445 4902 scope.go:117] "RemoveContainer" containerID="5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.822951 4902 scope.go:117] "RemoveContainer" containerID="43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.843756 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-public-tls-certs\") pod \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.843828 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea9ca5b-2e24-41de-8a99-a882ec11c222-logs\") pod \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.843878 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-config-data\") pod \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.843965 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-internal-tls-certs\") pod \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.844056 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-combined-ca-bundle\") pod \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.844087 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk9n4\" (UniqueName: \"kubernetes.io/projected/0ea9ca5b-2e24-41de-8a99-a882ec11c222-kube-api-access-xk9n4\") pod \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.844406 4902 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.844398 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ea9ca5b-2e24-41de-8a99-a882ec11c222-logs" (OuterVolumeSpecName: "logs") pod "0ea9ca5b-2e24-41de-8a99-a882ec11c222" (UID: "0ea9ca5b-2e24-41de-8a99-a882ec11c222"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.852563 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ea9ca5b-2e24-41de-8a99-a882ec11c222-kube-api-access-xk9n4" (OuterVolumeSpecName: "kube-api-access-xk9n4") pod "0ea9ca5b-2e24-41de-8a99-a882ec11c222" (UID: "0ea9ca5b-2e24-41de-8a99-a882ec11c222"). InnerVolumeSpecName "kube-api-access-xk9n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.857390 4902 scope.go:117] "RemoveContainer" containerID="5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.865422 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0\": container with ID starting with 5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0 not found: ID does not exist" containerID="5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.865517 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0"} err="failed to get container status \"5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0\": rpc error: code = NotFound desc = could not find container \"5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0\": container with ID starting with 5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0 not found: ID does not exist" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.865552 4902 scope.go:117] "RemoveContainer" containerID="43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.866182 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb\": container with ID starting with 43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb not found: ID does not exist" containerID="43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.866215 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb"} err="failed to get container status \"43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb\": rpc error: code = NotFound desc = could not find container \"43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb\": container with ID starting with 43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb not found: ID does not exist" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.866240 4902 scope.go:117] "RemoveContainer" containerID="421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.871744 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-config-data" (OuterVolumeSpecName: "config-data") pod "0ea9ca5b-2e24-41de-8a99-a882ec11c222" (UID: "0ea9ca5b-2e24-41de-8a99-a882ec11c222"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.876579 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ea9ca5b-2e24-41de-8a99-a882ec11c222" (UID: "0ea9ca5b-2e24-41de-8a99-a882ec11c222"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.889368 4902 scope.go:117] "RemoveContainer" containerID="421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.891499 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812\": container with ID starting with 421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812 not found: ID does not exist" containerID="421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.891543 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812"} err="failed to get container status \"421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812\": rpc error: code = NotFound desc = could not find container \"421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812\": container with ID starting with 421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812 not found: ID does not exist" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.895475 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0ea9ca5b-2e24-41de-8a99-a882ec11c222" (UID: "0ea9ca5b-2e24-41de-8a99-a882ec11c222"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.903829 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0ea9ca5b-2e24-41de-8a99-a882ec11c222" (UID: "0ea9ca5b-2e24-41de-8a99-a882ec11c222"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.948640 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.948931 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea9ca5b-2e24-41de-8a99-a882ec11c222-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.948968 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.948979 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.948993 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.949001 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk9n4\" (UniqueName: \"kubernetes.io/projected/0ea9ca5b-2e24-41de-8a99-a882ec11c222-kube-api-access-xk9n4\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.124004 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerID="fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748" exitCode=0 Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.124107 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ea9ca5b-2e24-41de-8a99-a882ec11c222","Type":"ContainerDied","Data":"fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748"} Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.124158 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ea9ca5b-2e24-41de-8a99-a882ec11c222","Type":"ContainerDied","Data":"6a0fa8e1aa73ccaec735410bd00188e5105d8445c279b0829562f3033236ffec"} Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.124186 4902 scope.go:117] "RemoveContainer" containerID="fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.124350 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.135007 4902 generic.go:334] "Generic (PLEG): container finished" podID="2c70bcdb-316e-4246-b333-ddaf6438c6ee" containerID="c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2" exitCode=0 Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.135107 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.135209 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2c70bcdb-316e-4246-b333-ddaf6438c6ee","Type":"ContainerDied","Data":"c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2"} Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.135234 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2c70bcdb-316e-4246-b333-ddaf6438c6ee","Type":"ContainerDied","Data":"012af9c88121ed6a56a653b1c142d5e67759c3d8ac9efeda00265ffdb3f91980"} Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.135535 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.137115 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.140230 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.140279 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerName="galera" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.141443 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2ff2c3d8-2d68-4255-a175-21f0df1b9276/ovn-northd/0.log" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.141559 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.142228 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2ff2c3d8-2d68-4255-a175-21f0df1b9276","Type":"ContainerDied","Data":"710e2e791f44aa4a7534510792c8ca7893edb756d648bcd8efc2a038da9f4e30"} Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.161638 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2c87v" event={"ID":"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2","Type":"ContainerDied","Data":"24e8d59ec3c64b717babfef7f378c16dbc7782ee7d0c22d80830c614a5f49681"} Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.161891 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.162695 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.245385 4902 scope.go:117] "RemoveContainer" containerID="155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.310687 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" path="/var/lib/kubelet/pods/365d6c18-395e-4a62-939d-a04927ffa8aa/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.311688 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" path="/var/lib/kubelet/pods/3aa6f350-dd82-4d59-ac24-5460acc2a8a6/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.312636 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="561efc1e-a930-440f-83b1-a75217a11f32" path="/var/lib/kubelet/pods/561efc1e-a930-440f-83b1-a75217a11f32/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.314196 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b52494a8-ff56-449e-a274-b37eb4bad43d" path="/var/lib/kubelet/pods/b52494a8-ff56-449e-a274-b37eb4bad43d/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.314829 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" path="/var/lib/kubelet/pods/b71fc896-318c-4277-bb32-70e3424a26c9/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.324723 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c366100e-d2a0-4be9-965f-ef7b7ad39f78" path="/var/lib/kubelet/pods/c366100e-d2a0-4be9-965f-ef7b7ad39f78/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.326098 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4168bc0-26cf-4786-9e28-95647462c372" path="/var/lib/kubelet/pods/c4168bc0-26cf-4786-9e28-95647462c372/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.328603 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" path="/var/lib/kubelet/pods/c653ffa0-195e-4eda-8c25-cfcff2715bdf/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.330614 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" path="/var/lib/kubelet/pods/db4d047b-49f4-4b55-a053-081f1be632b7/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.331839 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" path="/var/lib/kubelet/pods/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.334343 4902 scope.go:117] "RemoveContainer" containerID="fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748" Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.338900 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748\": container with ID starting with fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748 not found: ID does not exist" containerID="fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.338962 4902 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748"} err="failed to get container status \"fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748\": rpc error: code = NotFound desc = could not find container \"fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748\": container with ID starting with fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748 not found: ID does not exist" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.338997 4902 scope.go:117] "RemoveContainer" containerID="155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.339629 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.339684 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.339703 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2c87v"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.339714 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2c87v"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.339724 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.339768 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.343497 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f\": container with ID starting with 155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f not found: ID does not exist" containerID="155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.343535 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f"} err="failed to get container status \"155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f\": rpc error: code = NotFound desc = could not find container \"155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f\": container with ID starting with 155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f not found: ID does not exist" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.343556 4902 scope.go:117] "RemoveContainer" containerID="c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.371310 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b2af-account-create-update-852lx"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.375281 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b2af-account-create-update-852lx"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.391584 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.396151 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.398274 4902 scope.go:117] "RemoveContainer" 
containerID="c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2" Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.398717 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2\": container with ID starting with c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2 not found: ID does not exist" containerID="c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.398743 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2"} err="failed to get container status \"c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2\": rpc error: code = NotFound desc = could not find container \"c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2\": container with ID starting with c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2 not found: ID does not exist" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.398765 4902 scope.go:117] "RemoveContainer" containerID="e8a096d5f6a2e59562479be65d1cff285382747948d319ddcc17f47f718069db" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.419792 4902 scope.go:117] "RemoveContainer" containerID="c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.443517 4902 scope.go:117] "RemoveContainer" containerID="507ce1ddbb5ab555237835c4f50235305caf3b3ba28a9df9ec0892a88f2b0f8f" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.457655 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.457702 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfjwj\" (UniqueName: \"kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.457732 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.457801 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data podName:67f50f65-9151-4444-9680-f86e0f256069 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:28.457784145 +0000 UTC m=+1410.534617174 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data") pod "rabbitmq-server-0" (UID: "67f50f65-9151-4444-9680-f86e0f256069") : configmap "rabbitmq-config-data" not found Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.732323 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.763903 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-plugins\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.763953 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-plugins-conf\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.763991 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d7103bd-b24b-4a0c-b68a-17373307f1aa-erlang-cookie-secret\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.764019 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-erlang-cookie\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.764241 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d7103bd-b24b-4a0c-b68a-17373307f1aa-pod-info\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.764284 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-tls\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.764337 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkc98\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-kube-api-access-rkc98\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.764397 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.764421 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-confd\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.764457 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-server-conf\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: 
\"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.764502 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.766335 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.767196 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.771080 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-kube-api-access-rkc98" (OuterVolumeSpecName: "kube-api-access-rkc98") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "kube-api-access-rkc98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.771089 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.771709 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.772852 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.772881 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7103bd-b24b-4a0c-b68a-17373307f1aa-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.774627 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8d7103bd-b24b-4a0c-b68a-17373307f1aa-pod-info" (OuterVolumeSpecName: "pod-info") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.787106 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data" (OuterVolumeSpecName: "config-data") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.810392 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-server-conf" (OuterVolumeSpecName: "server-conf") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.862156 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.865776 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kolla-config\") pod \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.865888 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-generated\") pod \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.865945 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-default\") pod \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.865967 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.865996 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-operator-scripts\") pod \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.866035 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x82cz\" (UniqueName: 
\"kubernetes.io/projected/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kube-api-access-x82cz\") pod \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.866137 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-galera-tls-certs\") pod \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.866184 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-combined-ca-bundle\") pod \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.868180 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "19a933f8-5063-4cd1-8d3d-420e82d4e1fd" (UID: "19a933f8-5063-4cd1-8d3d-420e82d4e1fd"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.869455 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "19a933f8-5063-4cd1-8d3d-420e82d4e1fd" (UID: "19a933f8-5063-4cd1-8d3d-420e82d4e1fd"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.870428 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "19a933f8-5063-4cd1-8d3d-420e82d4e1fd" (UID: "19a933f8-5063-4cd1-8d3d-420e82d4e1fd"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.873992 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkc98\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-kube-api-access-rkc98\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874050 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874064 4902 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874074 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874083 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874091 4902 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874100 4902 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d7103bd-b24b-4a0c-b68a-17373307f1aa-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874109 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874118 4902 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d7103bd-b24b-4a0c-b68a-17373307f1aa-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874126 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.875271 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19a933f8-5063-4cd1-8d3d-420e82d4e1fd" (UID: "19a933f8-5063-4cd1-8d3d-420e82d4e1fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.877272 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "19a933f8-5063-4cd1-8d3d-420e82d4e1fd" (UID: "19a933f8-5063-4cd1-8d3d-420e82d4e1fd"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.878514 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.897188 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kube-api-access-x82cz" (OuterVolumeSpecName: "kube-api-access-x82cz") pod "19a933f8-5063-4cd1-8d3d-420e82d4e1fd" (UID: "19a933f8-5063-4cd1-8d3d-420e82d4e1fd"). InnerVolumeSpecName "kube-api-access-x82cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.906266 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.908598 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19a933f8-5063-4cd1-8d3d-420e82d4e1fd" (UID: "19a933f8-5063-4cd1-8d3d-420e82d4e1fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.948278 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "19a933f8-5063-4cd1-8d3d-420e82d4e1fd" (UID: "19a933f8-5063-4cd1-8d3d-420e82d4e1fd"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.974295 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975322 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975364 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975375 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975385 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x82cz\" (UniqueName: \"kubernetes.io/projected/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kube-api-access-x82cz\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975395 4902 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975402 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975411 4902 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975419 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975438 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975448 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.993480 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082551 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082607 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-plugins\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082658 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67f50f65-9151-4444-9680-f86e0f256069-erlang-cookie-secret\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082707 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-erlang-cookie\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082776 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sth8r\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-kube-api-access-sth8r\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082803 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-tls\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082853 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082928 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-confd\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082962 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-plugins-conf\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082989 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67f50f65-9151-4444-9680-f86e0f256069-pod-info\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.083097 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-server-conf\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.083449 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.083697 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.089098 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.090260 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.090402 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.092798 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.093257 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-kube-api-access-sth8r" (OuterVolumeSpecName: "kube-api-access-sth8r") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "kube-api-access-sth8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.098782 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67f50f65-9151-4444-9680-f86e0f256069-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.104694 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/67f50f65-9151-4444-9680-f86e0f256069-pod-info" (OuterVolumeSpecName: "pod-info") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). 
InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.105940 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data" (OuterVolumeSpecName: "config-data") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.131396 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-server-conf" (OuterVolumeSpecName: "server-conf") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184831 4902 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67f50f65-9151-4444-9680-f86e0f256069-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184858 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184870 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sth8r\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-kube-api-access-sth8r\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184878 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184898 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184908 4902 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184917 4902 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67f50f65-9151-4444-9680-f86e0f256069-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184925 4902 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184933 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184941 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.186358 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.189485 4902 generic.go:334] "Generic (PLEG): container finished" podID="67f50f65-9151-4444-9680-f86e0f256069" containerID="d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852" exitCode=0 Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.189607 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.190745 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67f50f65-9151-4444-9680-f86e0f256069","Type":"ContainerDied","Data":"d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852"} Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.190814 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67f50f65-9151-4444-9680-f86e0f256069","Type":"ContainerDied","Data":"08ee02c4a3aa1bd9f0c6f8daed756e3d6ec0c75c1f2a0da20740a10a51dd17d5"} Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.190833 4902 scope.go:117] "RemoveContainer" containerID="d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.194199 4902 generic.go:334] "Generic (PLEG): container finished" podID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerID="21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9" exitCode=0 Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.194247 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"19a933f8-5063-4cd1-8d3d-420e82d4e1fd","Type":"ContainerDied","Data":"21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9"} Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.194304 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"19a933f8-5063-4cd1-8d3d-420e82d4e1fd","Type":"ContainerDied","Data":"bf2f4711a987253bd77a78040ec2bd0cf16012bd15444fb1b640251be787c875"} Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.194271 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.196015 4902 generic.go:334] "Generic (PLEG): container finished" podID="8e00c7d5-7199-4602-9d3b-5af4f14124bc" containerID="ea8dbb434ad9bd3e85adcd00febd132baf741c5aae1afe358fb761a39bcb889e" exitCode=0 Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.196093 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5684459db4-jgdkj" event={"ID":"8e00c7d5-7199-4602-9d3b-5af4f14124bc","Type":"ContainerDied","Data":"ea8dbb434ad9bd3e85adcd00febd132baf741c5aae1afe358fb761a39bcb889e"} Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.199524 4902 generic.go:334] "Generic (PLEG): container finished" podID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerID="9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70" exitCode=0 Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.199585 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.199589 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d7103bd-b24b-4a0c-b68a-17373307f1aa","Type":"ContainerDied","Data":"9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70"} Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.199695 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d7103bd-b24b-4a0c-b68a-17373307f1aa","Type":"ContainerDied","Data":"43205feda26dd86650cc6a1b706524efcf814c15daa6ef3c2cb46d3126d049ac"} Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.245036 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.290485 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.290522 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.315788 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.320588 4902 scope.go:117] "RemoveContainer" containerID="61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.324291 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.348198 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.358397 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.368477 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.380533 4902 scope.go:117] "RemoveContainer" containerID="d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852" 
Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.381193 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852\": container with ID starting with d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852 not found: ID does not exist" containerID="d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.381224 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852"} err="failed to get container status \"d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852\": rpc error: code = NotFound desc = could not find container \"d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852\": container with ID starting with d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852 not found: ID does not exist"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.381247 4902 scope.go:117] "RemoveContainer" containerID="61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.381268 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.381715 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80\": container with ID starting with 61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80 not found: ID does not exist" containerID="61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.381755 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80"} err="failed to get container status \"61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80\": rpc error: code = NotFound desc = could not find container \"61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80\": container with ID starting with 61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80 not found: ID does not exist"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.381785 4902 scope.go:117] "RemoveContainer" containerID="21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.420339 4902 scope.go:117] "RemoveContainer" containerID="231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659"
Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.446325 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.447123 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.448663 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.449930 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.449999 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.450062 4902 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.451239 4902 scope.go:117] "RemoveContainer" containerID="21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9"
Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.459494 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.459586 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd"
Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.459525 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9\": container with ID starting with 21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9 not found: ID does not exist" containerID="21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.459721 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9"} err="failed to get container status \"21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9\": rpc error: code = NotFound desc = could not find container \"21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9\": container with ID starting with 21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9 not found: ID does not exist"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.459755 4902 scope.go:117] "RemoveContainer" containerID="231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659"
Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.460588 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659\": container with ID starting with 231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659 not found: ID does not exist" containerID="231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.460620 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659"} err="failed to get container status \"231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659\": rpc error: code = NotFound desc = could not find container \"231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659\": container with ID starting with 231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659 not found: ID does not exist"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.460639 4902 scope.go:117] "RemoveContainer" containerID="9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.502985 4902 scope.go:117] "RemoveContainer" containerID="92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.514666 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.528881 4902 scope.go:117] "RemoveContainer" containerID="9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70"
Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.533713 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70\": container with ID starting with 9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70 not found: ID does not exist" containerID="9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.533760 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70"} err="failed to get container status \"9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70\": rpc error: code = NotFound desc = could not find container \"9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70\": container with ID starting with 9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70 not found: ID does not exist"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.533785 4902 scope.go:117] "RemoveContainer" containerID="92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54"
Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.534210 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54\": container with ID starting with 92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54 not found: ID does not exist" containerID="92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.534234 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54"} err="failed to get container status \"92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54\": rpc error: code = NotFound desc = could not find container \"92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54\": container with ID starting with 92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54 not found: ID does not exist"
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.604599 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-fernet-keys\") pod \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") "
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.604676 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-combined-ca-bundle\") pod \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") "
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.604705 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-credential-keys\") pod \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") "
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.604734 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-config-data\") pod \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") "
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.604787 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-scripts\") pod \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") "
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.604842 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-public-tls-certs\") pod \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") "
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.604893 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-internal-tls-certs\") pod \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") "
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.604909 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtlrk\" (UniqueName: \"kubernetes.io/projected/8e00c7d5-7199-4602-9d3b-5af4f14124bc-kube-api-access-rtlrk\") pod \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") "
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.634578 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8e00c7d5-7199-4602-9d3b-5af4f14124bc" (UID: "8e00c7d5-7199-4602-9d3b-5af4f14124bc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.637228 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-scripts" (OuterVolumeSpecName: "scripts") pod "8e00c7d5-7199-4602-9d3b-5af4f14124bc" (UID: "8e00c7d5-7199-4602-9d3b-5af4f14124bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.650277 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8e00c7d5-7199-4602-9d3b-5af4f14124bc" (UID: "8e00c7d5-7199-4602-9d3b-5af4f14124bc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.650463 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e00c7d5-7199-4602-9d3b-5af4f14124bc-kube-api-access-rtlrk" (OuterVolumeSpecName: "kube-api-access-rtlrk") pod "8e00c7d5-7199-4602-9d3b-5af4f14124bc" (UID: "8e00c7d5-7199-4602-9d3b-5af4f14124bc"). InnerVolumeSpecName "kube-api-access-rtlrk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.706911 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtlrk\" (UniqueName: \"kubernetes.io/projected/8e00c7d5-7199-4602-9d3b-5af4f14124bc-kube-api-access-rtlrk\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.706948 4902 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.706958 4902 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.706967 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.743008 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-config-data" (OuterVolumeSpecName: "config-data") pod "8e00c7d5-7199-4602-9d3b-5af4f14124bc" (UID: "8e00c7d5-7199-4602-9d3b-5af4f14124bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.747973 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e00c7d5-7199-4602-9d3b-5af4f14124bc" (UID: "8e00c7d5-7199-4602-9d3b-5af4f14124bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.774277 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8e00c7d5-7199-4602-9d3b-5af4f14124bc" (UID: "8e00c7d5-7199-4602-9d3b-5af4f14124bc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.789324 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8e00c7d5-7199-4602-9d3b-5af4f14124bc" (UID: "8e00c7d5-7199-4602-9d3b-5af4f14124bc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.808018 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.808357 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.808451 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.808505 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.050562 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.115522 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-combined-ca-bundle\") pod \"dbc235c8-beef-433d-b663-e1d09b6a9b65\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") "
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.115705 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tbt5\" (UniqueName: \"kubernetes.io/projected/dbc235c8-beef-433d-b663-e1d09b6a9b65-kube-api-access-8tbt5\") pod \"dbc235c8-beef-433d-b663-e1d09b6a9b65\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") "
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.115803 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-config-data\") pod \"dbc235c8-beef-433d-b663-e1d09b6a9b65\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") "
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.119330 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbc235c8-beef-433d-b663-e1d09b6a9b65-kube-api-access-8tbt5" (OuterVolumeSpecName: "kube-api-access-8tbt5") pod "dbc235c8-beef-433d-b663-e1d09b6a9b65" (UID: "dbc235c8-beef-433d-b663-e1d09b6a9b65"). InnerVolumeSpecName "kube-api-access-8tbt5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.139187 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbc235c8-beef-433d-b663-e1d09b6a9b65" (UID: "dbc235c8-beef-433d-b663-e1d09b6a9b65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.142192 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-config-data" (OuterVolumeSpecName: "config-data") pod "dbc235c8-beef-433d-b663-e1d09b6a9b65" (UID: "dbc235c8-beef-433d-b663-e1d09b6a9b65"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.217099 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.217127 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tbt5\" (UniqueName: \"kubernetes.io/projected/dbc235c8-beef-433d-b663-e1d09b6a9b65-kube-api-access-8tbt5\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.217142 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.221809 4902 generic.go:334] "Generic (PLEG): container finished" podID="dbc235c8-beef-433d-b663-e1d09b6a9b65" containerID="357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae" exitCode=0
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.221881 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.221906 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"dbc235c8-beef-433d-b663-e1d09b6a9b65","Type":"ContainerDied","Data":"357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae"}
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.221962 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"dbc235c8-beef-433d-b663-e1d09b6a9b65","Type":"ContainerDied","Data":"b81adfeafc100f247345bb4dc1ec0bbf1a637bdabc4a363633412eb4f663c5f6"}
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.221986 4902 scope.go:117] "RemoveContainer" containerID="357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae"
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.223078 4902 generic.go:334] "Generic (PLEG): container finished" podID="359a818e-1c34-4dfd-bb59-0e72280a85a0" containerID="a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" exitCode=0
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.223132 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"359a818e-1c34-4dfd-bb59-0e72280a85a0","Type":"ContainerDied","Data":"a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe"}
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.227635 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5684459db4-jgdkj" event={"ID":"8e00c7d5-7199-4602-9d3b-5af4f14124bc","Type":"ContainerDied","Data":"a7b81b6927c5878e4864d8eea63ac6db97be31623e53b2291bbb5d03097d4cf8"}
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.227649 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5684459db4-jgdkj"
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.229940 4902 generic.go:334] "Generic (PLEG): container finished" podID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerID="51583e6b97e071d7cf96bdf513ff863344bb3712ef59fd993cdce4376b16aa3c" exitCode=0
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.230170 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887695489-rtxbl" event={"ID":"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5","Type":"ContainerDied","Data":"51583e6b97e071d7cf96bdf513ff863344bb3712ef59fd993cdce4376b16aa3c"}
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.230273 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887695489-rtxbl" event={"ID":"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5","Type":"ContainerDied","Data":"b6ec2a7ebbd2aee467c0043661c91112bee51c7e8687af847e64a040bb7767f9"}
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.230364 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6ec2a7ebbd2aee467c0043661c91112bee51c7e8687af847e64a040bb7767f9"
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.252553 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887695489-rtxbl"
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.257559 4902 scope.go:117] "RemoveContainer" containerID="357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae"
Jan 21 14:57:22 crc kubenswrapper[4902]: E0121 14:57:22.257890 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae\": container with ID starting with 357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae not found: ID does not exist" containerID="357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae"
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.257917 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae"} err="failed to get container status \"357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae\": rpc error: code = NotFound desc = could not find container \"357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae\": container with ID starting with 357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae not found: ID does not exist"
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.257937 4902 scope.go:117] "RemoveContainer" containerID="ea8dbb434ad9bd3e85adcd00febd132baf741c5aae1afe358fb761a39bcb889e"
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.315955 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" path="/var/lib/kubelet/pods/0ea9ca5b-2e24-41de-8a99-a882ec11c222/volumes"
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.317512 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-config\") pod \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") "
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.317556 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hptjz\" (UniqueName: \"kubernetes.io/projected/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-kube-api-access-hptjz\") pod \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") "
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.317606 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-combined-ca-bundle\") pod \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") "
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.317647 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-httpd-config\") pod \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") "
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.317672 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-internal-tls-certs\") pod \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") "
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.317705 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-public-tls-certs\") pod \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") "
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.317792 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-ovndb-tls-certs\") pod \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") "
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.318877 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" path="/var/lib/kubelet/pods/19a933f8-5063-4cd1-8d3d-420e82d4e1fd/volumes"
Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.320101 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2acfa57e-c4e9-4809-b5cb-109f1bbb64f2" path="/var/lib/kubelet/pods/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2/volumes"
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.001553 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5df595696d-2ftxp" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": dial tcp 10.217.0.160:9311: i/o timeout"
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.001977 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5df595696d-2ftxp" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.088028 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" (UID: "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.088308 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-kube-api-access-hptjz" (OuterVolumeSpecName: "kube-api-access-hptjz") pod "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" (UID: "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5"). InnerVolumeSpecName "kube-api-access-hptjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.088968 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c70bcdb-316e-4246-b333-ddaf6438c6ee" path="/var/lib/kubelet/pods/2c70bcdb-316e-4246-b333-ddaf6438c6ee/volumes"
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.092221 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" path="/var/lib/kubelet/pods/2ff2c3d8-2d68-4255-a175-21f0df1b9276/volumes"
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.105019 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bdb84d6-c599-4d87-9c27-cb32ff77d6d9" path="/var/lib/kubelet/pods/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9/volumes"
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.111753 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f50f65-9151-4444-9680-f86e0f256069" path="/var/lib/kubelet/pods/67f50f65-9151-4444-9680-f86e0f256069/volumes"
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.117768 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" path="/var/lib/kubelet/pods/8d7103bd-b24b-4a0c-b68a-17373307f1aa/volumes"
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.152942 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.152979 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hptjz\" (UniqueName: \"kubernetes.io/projected/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-kube-api-access-hptjz\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.166399 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" (UID: "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.166438 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-config" (OuterVolumeSpecName: "config") pod "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" (UID: "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.170564 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" (UID: "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.171251 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" (UID: "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.196814 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" (UID: "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.244955 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887695489-rtxbl"
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.254088 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.254157 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.254170 4902 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.254184 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-config\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.254197 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.288731 4902 kubelet_pods.go:2476] "Failed to reduce cpu time for pod pending volume cleanup" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" err="openat2 /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dc3ac42_826c_4f25_a3f7_d1ab2eb8cbf5.slice/cgroup.controllers: no such file or directory"
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.288795 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.288818 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.288832 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5684459db4-jgdkj"]
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.288843 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5684459db4-jgdkj"]
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.288857 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"359a818e-1c34-4dfd-bb59-0e72280a85a0","Type":"ContainerDied","Data":"4ffea13c5b1ca8a19fa0ab7ab117654ce080a9b7f7c854db7559f017b9ca3c40"}
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.288878 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ffea13c5b1ca8a19fa0ab7ab117654ce080a9b7f7c854db7559f017b9ca3c40"
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.302199 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.310576 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7887695489-rtxbl"]
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.315556 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7887695489-rtxbl"]
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.354784 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-combined-ca-bundle\") pod \"359a818e-1c34-4dfd-bb59-0e72280a85a0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") "
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.354884 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data\") pod \"359a818e-1c34-4dfd-bb59-0e72280a85a0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") "
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.355008 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww8pt\" (UniqueName: \"kubernetes.io/projected/359a818e-1c34-4dfd-bb59-0e72280a85a0-kube-api-access-ww8pt\") pod \"359a818e-1c34-4dfd-bb59-0e72280a85a0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") "
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.363113 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359a818e-1c34-4dfd-bb59-0e72280a85a0-kube-api-access-ww8pt" (OuterVolumeSpecName: "kube-api-access-ww8pt") pod "359a818e-1c34-4dfd-bb59-0e72280a85a0" (UID: "359a818e-1c34-4dfd-bb59-0e72280a85a0"). InnerVolumeSpecName "kube-api-access-ww8pt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:57:23 crc kubenswrapper[4902]: E0121 14:57:23.370672 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data podName:359a818e-1c34-4dfd-bb59-0e72280a85a0 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:23.870644513 +0000 UTC m=+1405.947477542 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data") pod "359a818e-1c34-4dfd-bb59-0e72280a85a0" (UID: "359a818e-1c34-4dfd-bb59-0e72280a85a0") : error deleting /var/lib/kubelet/pods/359a818e-1c34-4dfd-bb59-0e72280a85a0/volume-subpaths: remove /var/lib/kubelet/pods/359a818e-1c34-4dfd-bb59-0e72280a85a0/volume-subpaths: no such file or directory
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.372901 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "359a818e-1c34-4dfd-bb59-0e72280a85a0" (UID: "359a818e-1c34-4dfd-bb59-0e72280a85a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.456829 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww8pt\" (UniqueName: \"kubernetes.io/projected/359a818e-1c34-4dfd-bb59-0e72280a85a0-kube-api-access-ww8pt\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.457157 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.963819 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data\") pod \"359a818e-1c34-4dfd-bb59-0e72280a85a0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") "
Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.968626 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data" (OuterVolumeSpecName: "config-data") pod "359a818e-1c34-4dfd-bb59-0e72280a85a0" (UID: "359a818e-1c34-4dfd-bb59-0e72280a85a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:57:24 crc kubenswrapper[4902]: I0121 14:57:24.066100 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:57:24 crc kubenswrapper[4902]: I0121 14:57:24.251927 4902 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 14:57:24 crc kubenswrapper[4902]: I0121 14:57:24.282641 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:57:24 crc kubenswrapper[4902]: I0121 14:57:24.289249 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:57:24 crc kubenswrapper[4902]: I0121 14:57:24.308949 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="359a818e-1c34-4dfd-bb59-0e72280a85a0" path="/var/lib/kubelet/pods/359a818e-1c34-4dfd-bb59-0e72280a85a0/volumes" Jan 21 14:57:24 crc kubenswrapper[4902]: I0121 14:57:24.309967 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e00c7d5-7199-4602-9d3b-5af4f14124bc" path="/var/lib/kubelet/pods/8e00c7d5-7199-4602-9d3b-5af4f14124bc/volumes" Jan 21 14:57:24 crc kubenswrapper[4902]: I0121 14:57:24.311218 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" path="/var/lib/kubelet/pods/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5/volumes" Jan 21 14:57:24 crc kubenswrapper[4902]: I0121 14:57:24.313196 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbc235c8-beef-433d-b663-e1d09b6a9b65" path="/var/lib/kubelet/pods/dbc235c8-beef-433d-b663-e1d09b6a9b65/volumes" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.019443 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.080243 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-ceilometer-tls-certs\") pod \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.080305 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-run-httpd\") pod \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.080381 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-config-data\") pod \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.080433 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5c9s\" (UniqueName: \"kubernetes.io/projected/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-kube-api-access-r5c9s\") pod \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.080470 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-combined-ca-bundle\") pod \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.080522 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-log-httpd\") pod \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.080560 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-scripts\") pod \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.080643 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-sg-core-conf-yaml\") pod \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.082033 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" (UID: "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.082329 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" (UID: "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.086147 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-kube-api-access-r5c9s" (OuterVolumeSpecName: "kube-api-access-r5c9s") pod "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" (UID: "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53"). InnerVolumeSpecName "kube-api-access-r5c9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.097446 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-scripts" (OuterVolumeSpecName: "scripts") pod "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" (UID: "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.102280 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" (UID: "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.119653 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" (UID: "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.146357 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" (UID: "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.162506 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-config-data" (OuterVolumeSpecName: "config-data") pod "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" (UID: "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.182344 4902 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.182381 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.182390 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.182400 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5c9s\" (UniqueName: \"kubernetes.io/projected/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-kube-api-access-r5c9s\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.182411 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.182419 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.182430 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.182439 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.262903 4902 generic.go:334] "Generic (PLEG): container finished" podID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerID="c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67" exitCode=0 Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.262955 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerDied","Data":"c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67"} Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.262982 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.262997 4902 scope.go:117] "RemoveContainer" containerID="d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.262985 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerDied","Data":"c200d00278992d8d2cca7e33c912295c7207132824d0c1563c30e02dcd83a48e"} Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.297274 4902 scope.go:117] "RemoveContainer" containerID="91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.303338 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.308612 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.314565 4902 scope.go:117] "RemoveContainer" containerID="c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.334689 4902 scope.go:117] "RemoveContainer" containerID="49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.359091 4902 scope.go:117] "RemoveContainer" containerID="d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0" Jan 21 14:57:25 crc kubenswrapper[4902]: E0121 14:57:25.359426 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0\": container with ID starting with d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0 not found: ID does not exist" containerID="d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.359454 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0"} err="failed to get container status \"d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0\": rpc error: code = NotFound desc = could not find container \"d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0\": container with ID starting with d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0 not found: ID does not exist" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.359476 4902 scope.go:117] "RemoveContainer" containerID="91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881" Jan 21 14:57:25 crc kubenswrapper[4902]: E0121 14:57:25.359795 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881\": container with ID starting with 91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881 not found: ID does not exist" containerID="91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881" Jan 21 14:57:25 crc kubenswrapper[4902]: 
I0121 14:57:25.359851 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881"} err="failed to get container status \"91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881\": rpc error: code = NotFound desc = could not find container \"91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881\": container with ID starting with 91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881 not found: ID does not exist" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.359944 4902 scope.go:117] "RemoveContainer" containerID="c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67" Jan 21 14:57:25 crc kubenswrapper[4902]: E0121 14:57:25.360278 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67\": container with ID starting with c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67 not found: ID does not exist" containerID="c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.360312 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67"} err="failed to get container status \"c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67\": rpc error: code = NotFound desc = could not find container \"c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67\": container with ID starting with c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67 not found: ID does not exist" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.360331 4902 scope.go:117] "RemoveContainer" containerID="49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110" Jan 21 14:57:25 crc kubenswrapper[4902]: E0121 14:57:25.360587 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110\": container with ID starting with 49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110 not found: ID does not exist" containerID="49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.360608 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110"} err="failed to get container status \"49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110\": rpc error: code = NotFound desc = could not find container \"49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110\": container with ID starting with 49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110 not found: ID does not exist" Jan 21 14:57:26 crc kubenswrapper[4902]: I0121 14:57:26.304270 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" path="/var/lib/kubelet/pods/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53/volumes" Jan 21 14:57:26 crc kubenswrapper[4902]: E0121 14:57:26.443152 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:26 crc kubenswrapper[4902]: E0121 14:57:26.443590 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:26 crc kubenswrapper[4902]: E0121 14:57:26.443883 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:26 crc kubenswrapper[4902]: E0121 14:57:26.443924 4902 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:57:26 crc kubenswrapper[4902]: E0121 14:57:26.444361 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:26 crc kubenswrapper[4902]: E0121 14:57:26.446262 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:26 crc kubenswrapper[4902]: E0121 14:57:26.449918 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:26 crc kubenswrapper[4902]: E0121 14:57:26.450007 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:57:31 crc kubenswrapper[4902]: E0121 14:57:31.442518 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container 
process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:31 crc kubenswrapper[4902]: E0121 14:57:31.443386 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:31 crc kubenswrapper[4902]: E0121 14:57:31.443855 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:31 crc kubenswrapper[4902]: E0121 14:57:31.443891 4902 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:57:31 crc kubenswrapper[4902]: E0121 14:57:31.444000 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:31 crc kubenswrapper[4902]: E0121 14:57:31.445035 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:31 crc kubenswrapper[4902]: E0121 14:57:31.446036 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:31 crc kubenswrapper[4902]: E0121 14:57:31.446085 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:57:36 crc kubenswrapper[4902]: E0121 14:57:36.442539 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" 
containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:36 crc kubenswrapper[4902]: E0121 14:57:36.444233 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:36 crc kubenswrapper[4902]: E0121 14:57:36.444277 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:36 crc kubenswrapper[4902]: E0121 14:57:36.444974 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:36 crc kubenswrapper[4902]: E0121 14:57:36.445059 4902 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:57:36 crc kubenswrapper[4902]: E0121 14:57:36.446526 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:36 crc kubenswrapper[4902]: E0121 14:57:36.447925 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:36 crc kubenswrapper[4902]: E0121 14:57:36.447984 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:57:41 crc kubenswrapper[4902]: E0121 14:57:41.442467 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" 
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:41 crc kubenswrapper[4902]: E0121 14:57:41.444225 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:41 crc kubenswrapper[4902]: E0121 14:57:41.444318 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:41 crc kubenswrapper[4902]: E0121 14:57:41.444809 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:41 crc kubenswrapper[4902]: E0121 14:57:41.444885 4902 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:57:41 crc kubenswrapper[4902]: E0121 14:57:41.446436 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:41 crc kubenswrapper[4902]: E0121 14:57:41.450371 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:41 crc kubenswrapper[4902]: E0121 14:57:41.450528 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.089738 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.166545 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqvxq\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-kube-api-access-hqvxq\") pod \"ee214fec-083a-4abd-b65e-003bccee24fa\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.166631 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"ee214fec-083a-4abd-b65e-003bccee24fa\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.166699 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-cache\") pod \"ee214fec-083a-4abd-b65e-003bccee24fa\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.166758 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ee214fec-083a-4abd-b65e-003bccee24fa\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.166812 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-lock\") pod \"ee214fec-083a-4abd-b65e-003bccee24fa\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.167435 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-lock" (OuterVolumeSpecName: "lock") pod "ee214fec-083a-4abd-b65e-003bccee24fa" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.167562 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-cache" (OuterVolumeSpecName: "cache") pod "ee214fec-083a-4abd-b65e-003bccee24fa" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.167821 4902 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-cache\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.167843 4902 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-lock\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.174459 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "swift") pod "ee214fec-083a-4abd-b65e-003bccee24fa" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.188661 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-kube-api-access-hqvxq" (OuterVolumeSpecName: "kube-api-access-hqvxq") pod "ee214fec-083a-4abd-b65e-003bccee24fa" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa"). InnerVolumeSpecName "kube-api-access-hqvxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.197228 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ee214fec-083a-4abd-b65e-003bccee24fa" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.268968 4902 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.269026 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.269036 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqvxq\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-kube-api-access-hqvxq\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.289329 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.345291 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4sm9h_bfa512c9-b91a-4a30-8a23-548ef53b094e/ovs-vswitchd/0.log" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.346500 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.370169 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.440414 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc" exitCode=137 Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.440489 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc"} Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.440719 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"6c463f82994bcd8248458f35757eded9002826e57bff7f1770ee0560e5c7ce9d"} Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.440537 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.440771 4902 scope.go:117] "RemoveContainer" containerID="71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.450919 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4sm9h_bfa512c9-b91a-4a30-8a23-548ef53b094e/ovs-vswitchd/0.log" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.467065 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.467444 4902 scope.go:117] "RemoveContainer" containerID="a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.468005 4902 generic.go:334] "Generic (PLEG): container finished" podID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" exitCode=137 Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.468307 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4sm9h" event={"ID":"bfa512c9-b91a-4a30-8a23-548ef53b094e","Type":"ContainerDied","Data":"0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1"} Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.468389 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4sm9h" event={"ID":"bfa512c9-b91a-4a30-8a23-548ef53b094e","Type":"ContainerDied","Data":"802447b9b93240937e871b9f5fd717abb6508a7f8537087545c7900d7f4a54d8"} Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.470684 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-log\") pod \"bfa512c9-b91a-4a30-8a23-548ef53b094e\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.470768 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa512c9-b91a-4a30-8a23-548ef53b094e-scripts\") pod \"bfa512c9-b91a-4a30-8a23-548ef53b094e\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.470868 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-488bn\" (UniqueName: \"kubernetes.io/projected/bfa512c9-b91a-4a30-8a23-548ef53b094e-kube-api-access-488bn\") pod \"bfa512c9-b91a-4a30-8a23-548ef53b094e\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.470906 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-etc-ovs\") pod \"bfa512c9-b91a-4a30-8a23-548ef53b094e\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.470930 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-lib\") pod \"bfa512c9-b91a-4a30-8a23-548ef53b094e\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.470983 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-run\") pod \"bfa512c9-b91a-4a30-8a23-548ef53b094e\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.471359 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-run" (OuterVolumeSpecName: "var-run") pod "bfa512c9-b91a-4a30-8a23-548ef53b094e" (UID: "bfa512c9-b91a-4a30-8a23-548ef53b094e"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.471399 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "bfa512c9-b91a-4a30-8a23-548ef53b094e" (UID: "bfa512c9-b91a-4a30-8a23-548ef53b094e"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.471426 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-log" (OuterVolumeSpecName: "var-log") pod "bfa512c9-b91a-4a30-8a23-548ef53b094e" (UID: "bfa512c9-b91a-4a30-8a23-548ef53b094e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.471405 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-lib" (OuterVolumeSpecName: "var-lib") pod "bfa512c9-b91a-4a30-8a23-548ef53b094e" (UID: "bfa512c9-b91a-4a30-8a23-548ef53b094e"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.474436 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa512c9-b91a-4a30-8a23-548ef53b094e-scripts" (OuterVolumeSpecName: "scripts") pod "bfa512c9-b91a-4a30-8a23-548ef53b094e" (UID: "bfa512c9-b91a-4a30-8a23-548ef53b094e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.480399 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa512c9-b91a-4a30-8a23-548ef53b094e-kube-api-access-488bn" (OuterVolumeSpecName: "kube-api-access-488bn") pod "bfa512c9-b91a-4a30-8a23-548ef53b094e" (UID: "bfa512c9-b91a-4a30-8a23-548ef53b094e"). InnerVolumeSpecName "kube-api-access-488bn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.491707 4902 scope.go:117] "RemoveContainer" containerID="589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.509306 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.514221 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.520591 4902 scope.go:117] "RemoveContainer" containerID="6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.538481 4902 scope.go:117] "RemoveContainer" containerID="0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.556011 4902 scope.go:117] "RemoveContainer" containerID="a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.572540 4902 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.572570 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa512c9-b91a-4a30-8a23-548ef53b094e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.572582 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-488bn\" (UniqueName: \"kubernetes.io/projected/bfa512c9-b91a-4a30-8a23-548ef53b094e-kube-api-access-488bn\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.572594 4902 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.572604 4902 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-lib\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.572614 4902 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.573203 4902 scope.go:117] "RemoveContainer" containerID="fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.587059 4902 scope.go:117] "RemoveContainer" containerID="eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.605230 4902 scope.go:117] "RemoveContainer" containerID="756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.625287 4902 scope.go:117] "RemoveContainer" containerID="c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.643297 4902 scope.go:117] "RemoveContainer" containerID="df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157" Jan 21 14:57:43 crc 
kubenswrapper[4902]: I0121 14:57:43.659724 4902 scope.go:117] "RemoveContainer" containerID="b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.679380 4902 scope.go:117] "RemoveContainer" containerID="723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.695008 4902 scope.go:117] "RemoveContainer" containerID="ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.710287 4902 scope.go:117] "RemoveContainer" containerID="69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.731445 4902 scope.go:117] "RemoveContainer" containerID="71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.731972 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc\": container with ID starting with 71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc not found: ID does not exist" containerID="71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.732016 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc"} err="failed to get container status \"71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc\": rpc error: code = NotFound desc = could not find container \"71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc\": container with ID starting with 71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.732070 4902 scope.go:117] "RemoveContainer" containerID="a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.732465 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f\": container with ID starting with a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f not found: ID does not exist" containerID="a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.732494 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f"} err="failed to get container status \"a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f\": rpc error: code = NotFound desc = could not find container \"a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f\": container with ID starting with a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.732613 4902 scope.go:117] "RemoveContainer" containerID="589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.733032 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e\": container with ID starting with 589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e not found: ID does not exist" containerID="589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.733175 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e"} err="failed to get container status \"589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e\": rpc error: code = NotFound desc = could not find container \"589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e\": container with ID starting with 589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.733290 4902 scope.go:117] "RemoveContainer" containerID="6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.733672 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179\": container with ID starting with 6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179 not found: ID does not exist" containerID="6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.733694 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179"} err="failed to get container status \"6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179\": rpc error: code = NotFound desc = could not find container \"6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179\": container with ID starting with 6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179 not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.733708 4902 scope.go:117] "RemoveContainer" containerID="0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.734186 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a\": container with ID starting with 0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a not found: ID does not exist" containerID="0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.734218 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a"} err="failed to get container status \"0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a\": rpc error: code = NotFound desc = could not find container \"0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a\": container with ID starting with 0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.734259 4902 scope.go:117] "RemoveContainer" containerID="a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9" Jan 21 14:57:43 crc 
kubenswrapper[4902]: E0121 14:57:43.734543 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9\": container with ID starting with a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9 not found: ID does not exist" containerID="a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.734568 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9"} err="failed to get container status \"a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9\": rpc error: code = NotFound desc = could not find container \"a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9\": container with ID starting with a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9 not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.734583 4902 scope.go:117] "RemoveContainer" containerID="fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.735125 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f\": container with ID starting with fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f not found: ID does not exist" containerID="fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.735144 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f"} err="failed to get container status \"fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f\": rpc error: code = NotFound desc = could not find container \"fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f\": container with ID starting with fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.735159 4902 scope.go:117] "RemoveContainer" containerID="eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.735690 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e\": container with ID starting with eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e not found: ID does not exist" containerID="eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.735710 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e"} err="failed to get container status \"eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e\": rpc error: code = NotFound desc = could not find container \"eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e\": container with ID starting with eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: 
I0121 14:57:43.735723 4902 scope.go:117] "RemoveContainer" containerID="756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.736405 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606\": container with ID starting with 756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606 not found: ID does not exist" containerID="756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.736424 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606"} err="failed to get container status \"756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606\": rpc error: code = NotFound desc = could not find container \"756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606\": container with ID starting with 756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606 not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.736439 4902 scope.go:117] "RemoveContainer" containerID="c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.736925 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a\": container with ID starting with c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a not found: ID does not exist" containerID="c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.736970 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a"} err="failed to get container status \"c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a\": rpc error: code = NotFound desc = could not find container \"c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a\": container with ID starting with c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.736989 4902 scope.go:117] "RemoveContainer" containerID="df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.737460 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157\": container with ID starting with df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157 not found: ID does not exist" containerID="df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.737482 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157"} err="failed to get container status \"df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157\": rpc error: code = NotFound desc = could not find container \"df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157\": container 
with ID starting with df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157 not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.737590 4902 scope.go:117] "RemoveContainer" containerID="b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.737929 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad\": container with ID starting with b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad not found: ID does not exist" containerID="b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.737976 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad"} err="failed to get container status \"b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad\": rpc error: code = NotFound desc = could not find container \"b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad\": container with ID starting with b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.737993 4902 scope.go:117] "RemoveContainer" containerID="723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.738457 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135\": container with ID starting with 723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135 not found: ID does not exist" containerID="723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.738483 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135"} err="failed to get container status \"723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135\": rpc error: code = NotFound desc = could not find container \"723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135\": container with ID starting with 723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135 not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.738500 4902 scope.go:117] "RemoveContainer" containerID="ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.738824 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5\": container with ID starting with ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5 not found: ID does not exist" containerID="ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.738870 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5"} err="failed to get container status 
\"ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5\": rpc error: code = NotFound desc = could not find container \"ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5\": container with ID starting with ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5 not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.738887 4902 scope.go:117] "RemoveContainer" containerID="69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.739248 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b\": container with ID starting with 69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b not found: ID does not exist" containerID="69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.739270 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b"} err="failed to get container status \"69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b\": rpc error: code = NotFound desc = could not find container \"69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b\": container with ID starting with 69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.739286 4902 scope.go:117] "RemoveContainer" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.760895 4902 scope.go:117] "RemoveContainer" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.777680 4902 scope.go:117] "RemoveContainer" containerID="e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.797508 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-4sm9h"] Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.807465 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-4sm9h"] Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.815022 4902 scope.go:117] "RemoveContainer" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.815655 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1\": container with ID starting with 0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1 not found: ID does not exist" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.815716 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1"} err="failed to get container status \"0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1\": rpc error: code = NotFound desc = could not find container \"0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1\": container with ID 
starting with 0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1 not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.815748 4902 scope.go:117] "RemoveContainer" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.816168 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8\": container with ID starting with df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 not found: ID does not exist" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.816247 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8"} err="failed to get container status \"df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8\": rpc error: code = NotFound desc = could not find container \"df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8\": container with ID starting with df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.816355 4902 scope.go:117] "RemoveContainer" containerID="e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.816663 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb\": container with ID starting with e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb not found: ID does not exist" containerID="e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.816725 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb"} err="failed to get container status \"e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb\": rpc error: code = NotFound desc = could not find container \"e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb\": container with ID starting with e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb not found: ID does not exist" Jan 21 14:57:44 crc kubenswrapper[4902]: I0121 14:57:44.306897 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" path="/var/lib/kubelet/pods/bfa512c9-b91a-4a30-8a23-548ef53b094e/volumes" Jan 21 14:57:44 crc kubenswrapper[4902]: I0121 14:57:44.308490 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" path="/var/lib/kubelet/pods/ee214fec-083a-4abd-b65e-003bccee24fa/volumes" Jan 21 14:57:44 crc kubenswrapper[4902]: I0121 14:57:44.377878 4902 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod5ef26f87-2d73-4847-abfb-a3bbda8c01c6"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod5ef26f87-2d73-4847-abfb-a3bbda8c01c6] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5ef26f87_2d73_4847_abfb_a3bbda8c01c6.slice" Jan 21 14:57:44 crc 
kubenswrapper[4902]: E0121 14:57:44.378183 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod5ef26f87-2d73-4847-abfb-a3bbda8c01c6] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod5ef26f87-2d73-4847-abfb-a3bbda8c01c6] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5ef26f87_2d73_4847_abfb_a3bbda8c01c6.slice" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" podUID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" Jan 21 14:57:44 crc kubenswrapper[4902]: I0121 14:57:44.397917 4902 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod8891f80f-6cb0-4dc6-9f92-836d465e1c84"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod8891f80f-6cb0-4dc6-9f92-836d465e1c84] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8891f80f_6cb0_4dc6_9f92_836d465e1c84.slice" Jan 21 14:57:44 crc kubenswrapper[4902]: I0121 14:57:44.419176 4902 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pode8135258-f03d-4c9a-be6f-7dd1dd099188"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pode8135258-f03d-4c9a-be6f-7dd1dd099188] : Timed out while waiting for systemd to remove kubepods-besteffort-pode8135258_f03d_4c9a_be6f_7dd1dd099188.slice" Jan 21 14:57:44 crc kubenswrapper[4902]: I0121 14:57:44.482878 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:57:44 crc kubenswrapper[4902]: I0121 14:57:44.539214 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-gzrwg"] Jan 21 14:57:44 crc kubenswrapper[4902]: I0121 14:57:44.546536 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-gzrwg"] Jan 21 14:57:46 crc kubenswrapper[4902]: I0121 14:57:46.312253 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" path="/var/lib/kubelet/pods/5ef26f87-2d73-4847-abfb-a3bbda8c01c6/volumes" Jan 21 14:57:47 crc kubenswrapper[4902]: I0121 14:57:47.770095 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:57:47 crc kubenswrapper[4902]: I0121 14:57:47.770164 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:58:17 crc kubenswrapper[4902]: I0121 14:58:17.770016 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:58:17 crc kubenswrapper[4902]: I0121 14:58:17.770566 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.026768 4902 scope.go:117] "RemoveContainer" containerID="4de70b4a162bef7d46289abd4a1b9363b5ded88ef279f8bfde6f5eb04e8068c8" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.064821 4902 scope.go:117] "RemoveContainer" containerID="1e365c417d7c9fc9f0e3c50b8df2956ab629924185f3c066a501456bc7f2f244" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.088316 4902 scope.go:117] "RemoveContainer" containerID="04e51686a115d7efa7ccafee00c3c35f348877ed4159bb02ef8fdec725c74808" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.115120 4902 scope.go:117] "RemoveContainer" containerID="1bfce2ecde4206400633bc9ed5a03f89132046bc198571a9ea9d8cdbe7e9aafa" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.150854 4902 scope.go:117] "RemoveContainer" containerID="fa5cddac767f0cfa37e86e0452a0e4172f930485b3055e92e46247cd7dffa247" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.168173 4902 scope.go:117] "RemoveContainer" containerID="f6b39c880fbd40f2782ed02884cfa856d1ecf3dfd90d97c9787d318a34cf7495" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.196951 4902 scope.go:117] "RemoveContainer" containerID="fabbe3c5e36565bf6c2514be460d8e197d15c7ef2a2eaad51eaaf9fc51cd6931" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.219475 4902 scope.go:117] "RemoveContainer" containerID="7d4422a73cd9c69151e982d6a24415a420632cf5387be9a9908b89fae4b7d136" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.243333 4902 scope.go:117] "RemoveContainer" containerID="2e960884dfc54470df60f875a779cf61caa394a9b9eb4b58037a649720bdac73" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.274227 4902 scope.go:117] "RemoveContainer" containerID="f8e614c23f60db2d2289c45f03de6ca360a2d28723c52bf7d5442f33e4ef3cb9" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.297791 4902 scope.go:117] "RemoveContainer" containerID="0db12f9364007deb6067c2c445b04573d37703a8a3c7073268d343c3233327a1" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.323814 4902 scope.go:117] "RemoveContainer" containerID="29527624e52b61188971d77dcdc19feadc4e519866ced3ad0c73f26335294506" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.348653 4902 scope.go:117] "RemoveContainer" containerID="9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.369423 4902 scope.go:117] "RemoveContainer" containerID="6843f7fdaa415e7e2f0347cd97fdaa8f7eaf2a1c6b75202daa5f85889752389a" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.390376 4902 scope.go:117] "RemoveContainer" containerID="9c7eb232194bf5acf0b72c5e4e2b10f32410c50f4767d8979981cf5af8e7ed7d" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.414817 4902 scope.go:117] "RemoveContainer" containerID="183c9aacc3759e23732dbe091d0a8125502d61ad06cbf81f3beb450ef89e7614" Jan 21 14:58:47 crc kubenswrapper[4902]: I0121 14:58:47.769878 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:58:47 crc kubenswrapper[4902]: I0121 14:58:47.770507 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:58:47 crc kubenswrapper[4902]: I0121 14:58:47.770558 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:58:47 crc kubenswrapper[4902]: I0121 14:58:47.771390 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"faf0ff0caeac282dde2bef565f9dbd539a4c5633dd4c8ba54b6bd0e6704b0a61"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:58:47 crc kubenswrapper[4902]: I0121 14:58:47.771478 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://faf0ff0caeac282dde2bef565f9dbd539a4c5633dd4c8ba54b6bd0e6704b0a61" gracePeriod=600 Jan 21 14:58:48 crc kubenswrapper[4902]: I0121 14:58:48.014992 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="faf0ff0caeac282dde2bef565f9dbd539a4c5633dd4c8ba54b6bd0e6704b0a61" exitCode=0 Jan 21 14:58:48 crc kubenswrapper[4902]: I0121 14:58:48.015073 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"faf0ff0caeac282dde2bef565f9dbd539a4c5633dd4c8ba54b6bd0e6704b0a61"} Jan 21 14:58:48 crc kubenswrapper[4902]: I0121 14:58:48.015354 4902 scope.go:117] "RemoveContainer" containerID="0203ec0a15ee1aa92f4eb3d8e44c0e52d1043afb244cf40caae4761f1f1ee369" Jan 21 14:58:49 crc kubenswrapper[4902]: I0121 14:58:49.026420 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"} Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.492442 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2g7"] Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.493787 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-api" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.493820 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-api" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.493871 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-reaper" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.493888 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-reaper" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.493919 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f50f65-9151-4444-9680-f86e0f256069" containerName="rabbitmq" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.493935 4902 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="67f50f65-9151-4444-9680-f86e0f256069" containerName="rabbitmq" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.493950 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="sg-core" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.493964 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="sg-core" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.493989 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494005 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494032 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerName="barbican-keystone-listener" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494087 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerName="barbican-keystone-listener" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494109 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494127 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494142 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4168bc0-26cf-4786-9e28-95647462c372" containerName="glance-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494156 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4168bc0-26cf-4786-9e28-95647462c372" containerName="glance-log" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494178 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-updater" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494196 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-updater" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494217 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c366100e-d2a0-4be9-965f-ef7b7ad39f78" containerName="nova-scheduler-scheduler" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494231 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c366100e-d2a0-4be9-965f-ef7b7ad39f78" containerName="nova-scheduler-scheduler" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494257 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="openstack-network-exporter" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494273 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="openstack-network-exporter" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494299 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbc235c8-beef-433d-b663-e1d09b6a9b65" containerName="nova-cell1-conductor-conductor" Jan 21 14:59:19 crc 
kubenswrapper[4902]: I0121 14:59:19.494315 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc235c8-beef-433d-b663-e1d09b6a9b65" containerName="nova-cell1-conductor-conductor" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494338 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494356 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494405 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-metadata" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494423 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-metadata" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494450 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerName="barbican-worker-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494466 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerName="barbican-worker-log" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494490 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494505 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494536 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerName="barbican-worker" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494553 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerName="barbican-worker" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494573 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerName="setup-container" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494589 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerName="setup-container" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494616 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494631 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494650 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-server" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494666 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-server" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494684 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerName="barbican-keystone-listener-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494700 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerName="barbican-keystone-listener-log" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494721 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-server" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494736 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-server" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494757 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359a818e-1c34-4dfd-bb59-0e72280a85a0" containerName="nova-cell0-conductor-conductor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494773 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="359a818e-1c34-4dfd-bb59-0e72280a85a0" containerName="nova-cell0-conductor-conductor" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494800 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server-init" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494815 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server-init" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494836 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494853 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494884 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.502680 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-log" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.502784 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.502804 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.502827 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="ceilometer-notification-agent" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.502847 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="ceilometer-notification-agent" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.502883 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerName="mysql-bootstrap" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.502902 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerName="mysql-bootstrap" Jan 21 14:59:19 
crc kubenswrapper[4902]: E0121 14:59:19.502931 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="proxy-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.502946 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="proxy-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.502974 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="ceilometer-central-agent" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.502989 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="ceilometer-central-agent" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503019 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c70bcdb-316e-4246-b333-ddaf6438c6ee" containerName="memcached" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503033 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c70bcdb-316e-4246-b333-ddaf6438c6ee" containerName="memcached" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503082 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerName="rabbitmq" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503097 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerName="rabbitmq" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503117 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-server" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503136 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-server" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503162 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f50f65-9151-4444-9680-f86e0f256069" containerName="setup-container" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503179 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f50f65-9151-4444-9680-f86e0f256069" containerName="setup-container" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503209 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503224 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503262 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2acfa57e-c4e9-4809-b5cb-109f1bbb64f2" containerName="mariadb-account-create-update" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503284 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2acfa57e-c4e9-4809-b5cb-109f1bbb64f2" containerName="mariadb-account-create-update" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503313 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503329 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" 
containerName="object-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503353 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e00c7d5-7199-4602-9d3b-5af4f14124bc" containerName="keystone-api" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503369 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e00c7d5-7199-4602-9d3b-5af4f14124bc" containerName="keystone-api" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503395 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503412 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503477 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503496 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-log" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503523 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-updater" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503542 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-updater" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503576 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerName="galera" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503594 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerName="galera" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503614 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503631 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api-log" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503649 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4168bc0-26cf-4786-9e28-95647462c372" containerName="glance-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503665 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4168bc0-26cf-4786-9e28-95647462c372" containerName="glance-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503692 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52494a8-ff56-449e-a274-b37eb4bad43d" containerName="kube-state-metrics" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503709 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52494a8-ff56-449e-a274-b37eb4bad43d" containerName="kube-state-metrics" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503732 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="ovn-northd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503769 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" 
containerName="ovn-northd" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503791 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-expirer" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503809 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-expirer" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503832 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-api" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503848 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-api" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503868 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="swift-recon-cron" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503884 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="swift-recon-cron" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503912 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="rsync" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503932 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="rsync" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504392 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504423 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="swift-recon-cron" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504457 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c366100e-d2a0-4be9-965f-ef7b7ad39f78" containerName="nova-scheduler-scheduler" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504489 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-server" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504519 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504574 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504596 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-server" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504615 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e00c7d5-7199-4602-9d3b-5af4f14124bc" containerName="keystone-api" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504646 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerName="rabbitmq" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504669 4902 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504695 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbc235c8-beef-433d-b663-e1d09b6a9b65" containerName="nova-cell1-conductor-conductor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504718 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504740 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-updater" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504766 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-server" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504784 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="ovn-northd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504810 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4168bc0-26cf-4786-9e28-95647462c372" containerName="glance-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504829 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b52494a8-ff56-449e-a274-b37eb4bad43d" containerName="kube-state-metrics" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504847 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerName="barbican-worker-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504874 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="sg-core" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504901 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504931 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="ceilometer-central-agent" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504953 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="proxy-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504972 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerName="barbican-keystone-listener-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504990 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-updater" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505063 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-metadata" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505091 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-reaper" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505111 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" 
containerName="object-expirer" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505134 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerName="galera" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505156 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="openstack-network-exporter" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505185 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="rsync" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505204 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505223 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerName="barbican-worker" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505248 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505266 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-api" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505288 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="ceilometer-notification-agent" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505317 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-api" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505335 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2acfa57e-c4e9-4809-b5cb-109f1bbb64f2" containerName="mariadb-account-create-update" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505356 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c70bcdb-316e-4246-b333-ddaf6438c6ee" containerName="memcached" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505375 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4168bc0-26cf-4786-9e28-95647462c372" containerName="glance-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505397 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerName="barbican-keystone-listener" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505415 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="359a818e-1c34-4dfd-bb59-0e72280a85a0" containerName="nova-cell0-conductor-conductor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505433 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505451 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505469 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:59:19 crc kubenswrapper[4902]: 
I0121 14:59:19.505489 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505507 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f50f65-9151-4444-9680-f86e0f256069" containerName="rabbitmq" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505526 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.507814 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.510818 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2g7"] Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.635378 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrvjl\" (UniqueName: \"kubernetes.io/projected/bfc1e018-89a5-4a48-8dc9-6230711c4c49-kube-api-access-qrvjl\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.635449 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-catalog-content\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.635467 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-utilities\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.736768 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrvjl\" (UniqueName: \"kubernetes.io/projected/bfc1e018-89a5-4a48-8dc9-6230711c4c49-kube-api-access-qrvjl\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.736911 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-catalog-content\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.736940 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-utilities\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.737596 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-utilities\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.737992 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-catalog-content\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.755111 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrvjl\" (UniqueName: \"kubernetes.io/projected/bfc1e018-89a5-4a48-8dc9-6230711c4c49-kube-api-access-qrvjl\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.838259 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.886187 4902 scope.go:117] "RemoveContainer" containerID="7692fd62f5f8d970ca1dd253fc5c7512cbe9da4bdb84caf7d56a5669f3d8f303" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.922219 4902 scope.go:117] "RemoveContainer" containerID="276b271b02ab000b334b001c5253fa10542fc6c000e67438f4ac84d47645e83c" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.976914 4902 scope.go:117] "RemoveContainer" containerID="8d87c9d3ad1eb4e5b4f658d4e0a489d56cdaee9fc570202fc41afc9916f4ea6a" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.017637 4902 scope.go:117] "RemoveContainer" containerID="c05aa038a30ca68cb9b9875b1713755a7a748b30cae2fd412e457a921170733c" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.085293 4902 scope.go:117] "RemoveContainer" containerID="2afbcb861df82627e26ab173626f1c8e32c7418b9f0cebb9c30b8e8a773fee20" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.110508 4902 scope.go:117] "RemoveContainer" containerID="885834645f14556231a3e7a784298540883e5e957ef165eb89b1d865e26a97ac" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.138465 4902 scope.go:117] "RemoveContainer" containerID="b979d6e79dba97b3f526cfab4506aea68e0143adfc4356de611547f4493bec9f" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.156147 4902 scope.go:117] "RemoveContainer" containerID="4cc1203e814fc62d40f33869f21d58884c995a732338ba7c6403b666fa8b712d" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.172016 4902 scope.go:117] "RemoveContainer" containerID="e09162a3ec37680929590914b38193023c428285227f1464b2740e369fca6b12" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.187063 4902 scope.go:117] "RemoveContainer" containerID="689584950e8fe70d3a520e19880e648a9cfc4e1dba5d9cf1c7c92f94555adda3" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.204277 4902 scope.go:117] "RemoveContainer" containerID="cff876825001ee2c7fa7f8bdbe379da8527d1a33b467f10b305adc0a8747aa98" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.228172 4902 scope.go:117] "RemoveContainer" containerID="3d6ff1e2aa4c6d25b1afbc1c6226ab9dd8acd5472135388cee9dad4beae1dc39" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.254416 4902 scope.go:117] "RemoveContainer" containerID="493b2b2d4384ed074008865724712fb9ff226fa56a68d6f6b8711c1447a2d13b" Jan 21 14:59:20 crc 
kubenswrapper[4902]: I0121 14:59:20.291396 4902 scope.go:117] "RemoveContainer" containerID="d68914c4c8e15dba0295d1fd9bb40d5fc60aa1162bc79ce24523d135a247b33e" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.320182 4902 scope.go:117] "RemoveContainer" containerID="277691b4cd995bb05532afffdba1de6a3149dc7dc1e0f0e9ce9ba32058b05cf6" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.336187 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2g7"] Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.349325 4902 scope.go:117] "RemoveContainer" containerID="af472f5b3bb9010ffaa61382ab0352d28b368f2e713ea44d92c653fb5e095055" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.379180 4902 scope.go:117] "RemoveContainer" containerID="dff0b2c9f0b06182d720253d8f2ef15a7b10dcf34cc35665586623d88b252d47" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.414270 4902 scope.go:117] "RemoveContainer" containerID="b1d80a37b9ccbfaf4f2535ef16320e6b2227313b028f8ea36eaf1aa897c3fa62" Jan 21 14:59:21 crc kubenswrapper[4902]: I0121 14:59:21.356092 4902 generic.go:334] "Generic (PLEG): container finished" podID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerID="2cde6e75f7222b067d6b79b31a7ebe5313dd71bd0e2b68973e655c5cf6f0a600" exitCode=0 Jan 21 14:59:21 crc kubenswrapper[4902]: I0121 14:59:21.356157 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2g7" event={"ID":"bfc1e018-89a5-4a48-8dc9-6230711c4c49","Type":"ContainerDied","Data":"2cde6e75f7222b067d6b79b31a7ebe5313dd71bd0e2b68973e655c5cf6f0a600"} Jan 21 14:59:21 crc kubenswrapper[4902]: I0121 14:59:21.357975 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2g7" event={"ID":"bfc1e018-89a5-4a48-8dc9-6230711c4c49","Type":"ContainerStarted","Data":"7aedcda0876399885a06ed84fcbe0f07fa12b56336f8d179ea7b38ba4421ef37"} Jan 21 14:59:22 crc kubenswrapper[4902]: I0121 14:59:22.370455 4902 generic.go:334] "Generic (PLEG): container finished" podID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerID="af29f10ace20181a510bff8176fb59c730f1f35a22ee04c32fe59d5d86239e27" exitCode=0 Jan 21 14:59:22 crc kubenswrapper[4902]: I0121 14:59:22.370530 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2g7" event={"ID":"bfc1e018-89a5-4a48-8dc9-6230711c4c49","Type":"ContainerDied","Data":"af29f10ace20181a510bff8176fb59c730f1f35a22ee04c32fe59d5d86239e27"} Jan 21 14:59:23 crc kubenswrapper[4902]: I0121 14:59:23.380201 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2g7" event={"ID":"bfc1e018-89a5-4a48-8dc9-6230711c4c49","Type":"ContainerStarted","Data":"3422abd8be70782bbc83b0962ea81e9a80ba6bb8f97ab843aad91552b7ac69ef"} Jan 21 14:59:23 crc kubenswrapper[4902]: I0121 14:59:23.404804 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tw2g7" podStartSLOduration=2.992041671 podStartE2EDuration="4.404780313s" podCreationTimestamp="2026-01-21 14:59:19 +0000 UTC" firstStartedPulling="2026-01-21 14:59:21.357701026 +0000 UTC m=+1523.434534055" lastFinishedPulling="2026-01-21 14:59:22.770439628 +0000 UTC m=+1524.847272697" observedRunningTime="2026-01-21 14:59:23.398075164 +0000 UTC m=+1525.474908203" watchObservedRunningTime="2026-01-21 14:59:23.404780313 +0000 UTC m=+1525.481613362" Jan 21 14:59:29 crc kubenswrapper[4902]: I0121 14:59:29.839188 4902 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:29 crc kubenswrapper[4902]: I0121 14:59:29.841096 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:29 crc kubenswrapper[4902]: I0121 14:59:29.885019 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:30 crc kubenswrapper[4902]: I0121 14:59:30.478720 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:30 crc kubenswrapper[4902]: I0121 14:59:30.532349 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2g7"] Jan 21 14:59:32 crc kubenswrapper[4902]: I0121 14:59:32.451552 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tw2g7" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerName="registry-server" containerID="cri-o://3422abd8be70782bbc83b0962ea81e9a80ba6bb8f97ab843aad91552b7ac69ef" gracePeriod=2 Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.461437 4902 generic.go:334] "Generic (PLEG): container finished" podID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerID="3422abd8be70782bbc83b0962ea81e9a80ba6bb8f97ab843aad91552b7ac69ef" exitCode=0 Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.461637 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2g7" event={"ID":"bfc1e018-89a5-4a48-8dc9-6230711c4c49","Type":"ContainerDied","Data":"3422abd8be70782bbc83b0962ea81e9a80ba6bb8f97ab843aad91552b7ac69ef"} Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.461775 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2g7" event={"ID":"bfc1e018-89a5-4a48-8dc9-6230711c4c49","Type":"ContainerDied","Data":"7aedcda0876399885a06ed84fcbe0f07fa12b56336f8d179ea7b38ba4421ef37"} Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.461787 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7aedcda0876399885a06ed84fcbe0f07fa12b56336f8d179ea7b38ba4421ef37" Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.502687 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.656355 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrvjl\" (UniqueName: \"kubernetes.io/projected/bfc1e018-89a5-4a48-8dc9-6230711c4c49-kube-api-access-qrvjl\") pod \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.656424 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-utilities\") pod \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.656493 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-catalog-content\") pod \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.658332 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-utilities" (OuterVolumeSpecName: "utilities") pod "bfc1e018-89a5-4a48-8dc9-6230711c4c49" (UID: "bfc1e018-89a5-4a48-8dc9-6230711c4c49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.671280 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfc1e018-89a5-4a48-8dc9-6230711c4c49-kube-api-access-qrvjl" (OuterVolumeSpecName: "kube-api-access-qrvjl") pod "bfc1e018-89a5-4a48-8dc9-6230711c4c49" (UID: "bfc1e018-89a5-4a48-8dc9-6230711c4c49"). InnerVolumeSpecName "kube-api-access-qrvjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.703673 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfc1e018-89a5-4a48-8dc9-6230711c4c49" (UID: "bfc1e018-89a5-4a48-8dc9-6230711c4c49"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.759240 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrvjl\" (UniqueName: \"kubernetes.io/projected/bfc1e018-89a5-4a48-8dc9-6230711c4c49-kube-api-access-qrvjl\") on node \"crc\" DevicePath \"\"" Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.759301 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.759319 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:59:34 crc kubenswrapper[4902]: I0121 14:59:34.471240 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:34 crc kubenswrapper[4902]: I0121 14:59:34.497717 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2g7"] Jan 21 14:59:34 crc kubenswrapper[4902]: I0121 14:59:34.503829 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2g7"] Jan 21 14:59:36 crc kubenswrapper[4902]: I0121 14:59:36.304155 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" path="/var/lib/kubelet/pods/bfc1e018-89a5-4a48-8dc9-6230711c4c49/volumes" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.127093 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6mpxw"] Jan 21 14:59:45 crc kubenswrapper[4902]: E0121 14:59:45.127869 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerName="extract-content" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.127885 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerName="extract-content" Jan 21 14:59:45 crc kubenswrapper[4902]: E0121 14:59:45.127898 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerName="registry-server" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.127906 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerName="registry-server" Jan 21 14:59:45 crc kubenswrapper[4902]: E0121 14:59:45.127920 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerName="extract-utilities" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.127930 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerName="extract-utilities" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.128180 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerName="registry-server" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.129436 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.146367 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6mpxw"] Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.217696 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-utilities\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.217820 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvx7m\" (UniqueName: \"kubernetes.io/projected/cae8d234-1e79-4509-be2f-286368c7e394-kube-api-access-rvx7m\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.217846 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-catalog-content\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.319621 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvx7m\" (UniqueName: \"kubernetes.io/projected/cae8d234-1e79-4509-be2f-286368c7e394-kube-api-access-rvx7m\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.319670 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-catalog-content\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.319761 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-utilities\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.320318 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-utilities\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.320429 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-catalog-content\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.341478 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rvx7m\" (UniqueName: \"kubernetes.io/projected/cae8d234-1e79-4509-be2f-286368c7e394-kube-api-access-rvx7m\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.459112 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.949179 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6mpxw"] Jan 21 14:59:46 crc kubenswrapper[4902]: I0121 14:59:46.560761 4902 generic.go:334] "Generic (PLEG): container finished" podID="cae8d234-1e79-4509-be2f-286368c7e394" containerID="784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f" exitCode=0 Jan 21 14:59:46 crc kubenswrapper[4902]: I0121 14:59:46.560882 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpxw" event={"ID":"cae8d234-1e79-4509-be2f-286368c7e394","Type":"ContainerDied","Data":"784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f"} Jan 21 14:59:46 crc kubenswrapper[4902]: I0121 14:59:46.561054 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpxw" event={"ID":"cae8d234-1e79-4509-be2f-286368c7e394","Type":"ContainerStarted","Data":"1b25494bfab6f21f8caa559335efe4ed7881aad4a905f3a2da79ccb3ba3a2b88"} Jan 21 14:59:47 crc kubenswrapper[4902]: I0121 14:59:47.569942 4902 generic.go:334] "Generic (PLEG): container finished" podID="cae8d234-1e79-4509-be2f-286368c7e394" containerID="8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961" exitCode=0 Jan 21 14:59:47 crc kubenswrapper[4902]: I0121 14:59:47.570087 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpxw" event={"ID":"cae8d234-1e79-4509-be2f-286368c7e394","Type":"ContainerDied","Data":"8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961"} Jan 21 14:59:48 crc kubenswrapper[4902]: I0121 14:59:48.579753 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpxw" event={"ID":"cae8d234-1e79-4509-be2f-286368c7e394","Type":"ContainerStarted","Data":"fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171"} Jan 21 14:59:48 crc kubenswrapper[4902]: I0121 14:59:48.597845 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6mpxw" podStartSLOduration=1.8876132380000001 podStartE2EDuration="3.597830579s" podCreationTimestamp="2026-01-21 14:59:45 +0000 UTC" firstStartedPulling="2026-01-21 14:59:46.563876461 +0000 UTC m=+1548.640709490" lastFinishedPulling="2026-01-21 14:59:48.274093802 +0000 UTC m=+1550.350926831" observedRunningTime="2026-01-21 14:59:48.595415201 +0000 UTC m=+1550.672248240" watchObservedRunningTime="2026-01-21 14:59:48.597830579 +0000 UTC m=+1550.674663608" Jan 21 14:59:55 crc kubenswrapper[4902]: I0121 14:59:55.459219 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:55 crc kubenswrapper[4902]: I0121 14:59:55.459729 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:55 crc kubenswrapper[4902]: I0121 
14:59:55.497892 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:55 crc kubenswrapper[4902]: I0121 14:59:55.695705 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:55 crc kubenswrapper[4902]: I0121 14:59:55.740482 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6mpxw"] Jan 21 14:59:57 crc kubenswrapper[4902]: I0121 14:59:57.658261 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6mpxw" podUID="cae8d234-1e79-4509-be2f-286368c7e394" containerName="registry-server" containerID="cri-o://fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171" gracePeriod=2 Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.584626 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.667656 4902 generic.go:334] "Generic (PLEG): container finished" podID="cae8d234-1e79-4509-be2f-286368c7e394" containerID="fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171" exitCode=0 Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.667704 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpxw" event={"ID":"cae8d234-1e79-4509-be2f-286368c7e394","Type":"ContainerDied","Data":"fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171"} Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.667728 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.667748 4902 scope.go:117] "RemoveContainer" containerID="fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.667735 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpxw" event={"ID":"cae8d234-1e79-4509-be2f-286368c7e394","Type":"ContainerDied","Data":"1b25494bfab6f21f8caa559335efe4ed7881aad4a905f3a2da79ccb3ba3a2b88"} Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.687025 4902 scope.go:117] "RemoveContainer" containerID="8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.704110 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvx7m\" (UniqueName: \"kubernetes.io/projected/cae8d234-1e79-4509-be2f-286368c7e394-kube-api-access-rvx7m\") pod \"cae8d234-1e79-4509-be2f-286368c7e394\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.704195 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-utilities\") pod \"cae8d234-1e79-4509-be2f-286368c7e394\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.704317 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-catalog-content\") pod \"cae8d234-1e79-4509-be2f-286368c7e394\" 
(UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.705002 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-utilities" (OuterVolumeSpecName: "utilities") pod "cae8d234-1e79-4509-be2f-286368c7e394" (UID: "cae8d234-1e79-4509-be2f-286368c7e394"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.707527 4902 scope.go:117] "RemoveContainer" containerID="784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.712299 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae8d234-1e79-4509-be2f-286368c7e394-kube-api-access-rvx7m" (OuterVolumeSpecName: "kube-api-access-rvx7m") pod "cae8d234-1e79-4509-be2f-286368c7e394" (UID: "cae8d234-1e79-4509-be2f-286368c7e394"). InnerVolumeSpecName "kube-api-access-rvx7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.755097 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cae8d234-1e79-4509-be2f-286368c7e394" (UID: "cae8d234-1e79-4509-be2f-286368c7e394"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.755603 4902 scope.go:117] "RemoveContainer" containerID="fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171" Jan 21 14:59:58 crc kubenswrapper[4902]: E0121 14:59:58.756235 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171\": container with ID starting with fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171 not found: ID does not exist" containerID="fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.756263 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171"} err="failed to get container status \"fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171\": rpc error: code = NotFound desc = could not find container \"fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171\": container with ID starting with fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171 not found: ID does not exist" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.756285 4902 scope.go:117] "RemoveContainer" containerID="8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961" Jan 21 14:59:58 crc kubenswrapper[4902]: E0121 14:59:58.756696 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961\": container with ID starting with 8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961 not found: ID does not exist" containerID="8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.756741 4902 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961"} err="failed to get container status \"8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961\": rpc error: code = NotFound desc = could not find container \"8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961\": container with ID starting with 8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961 not found: ID does not exist" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.756796 4902 scope.go:117] "RemoveContainer" containerID="784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f" Jan 21 14:59:58 crc kubenswrapper[4902]: E0121 14:59:58.757165 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f\": container with ID starting with 784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f not found: ID does not exist" containerID="784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.757239 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f"} err="failed to get container status \"784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f\": rpc error: code = NotFound desc = could not find container \"784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f\": container with ID starting with 784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f not found: ID does not exist" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.806232 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvx7m\" (UniqueName: \"kubernetes.io/projected/cae8d234-1e79-4509-be2f-286368c7e394-kube-api-access-rvx7m\") on node \"crc\" DevicePath \"\"" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.806267 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.806277 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.993986 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6mpxw"] Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.999307 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6mpxw"] Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.161061 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th"] Jan 21 15:00:00 crc kubenswrapper[4902]: E0121 15:00:00.161719 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae8d234-1e79-4509-be2f-286368c7e394" containerName="extract-utilities" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.161737 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae8d234-1e79-4509-be2f-286368c7e394" containerName="extract-utilities" Jan 21 15:00:00 crc kubenswrapper[4902]: E0121 15:00:00.161757 4902 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae8d234-1e79-4509-be2f-286368c7e394" containerName="registry-server" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.161764 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae8d234-1e79-4509-be2f-286368c7e394" containerName="registry-server" Jan 21 15:00:00 crc kubenswrapper[4902]: E0121 15:00:00.161779 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae8d234-1e79-4509-be2f-286368c7e394" containerName="extract-content" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.161787 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae8d234-1e79-4509-be2f-286368c7e394" containerName="extract-content" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.161934 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae8d234-1e79-4509-be2f-286368c7e394" containerName="registry-server" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.162518 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.165059 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.165240 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.180147 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th"] Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.231518 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ada0d02-9902-4746-b1ad-42b3f9e711a7-secret-volume\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.231584 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcjsw\" (UniqueName: \"kubernetes.io/projected/0ada0d02-9902-4746-b1ad-42b3f9e711a7-kube-api-access-wcjsw\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.231626 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ada0d02-9902-4746-b1ad-42b3f9e711a7-config-volume\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.303569 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae8d234-1e79-4509-be2f-286368c7e394" path="/var/lib/kubelet/pods/cae8d234-1e79-4509-be2f-286368c7e394/volumes" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.333013 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/0ada0d02-9902-4746-b1ad-42b3f9e711a7-config-volume\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.333144 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ada0d02-9902-4746-b1ad-42b3f9e711a7-secret-volume\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.333190 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcjsw\" (UniqueName: \"kubernetes.io/projected/0ada0d02-9902-4746-b1ad-42b3f9e711a7-kube-api-access-wcjsw\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.334393 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ada0d02-9902-4746-b1ad-42b3f9e711a7-config-volume\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.346952 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ada0d02-9902-4746-b1ad-42b3f9e711a7-secret-volume\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.349153 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcjsw\" (UniqueName: \"kubernetes.io/projected/0ada0d02-9902-4746-b1ad-42b3f9e711a7-kube-api-access-wcjsw\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.482231 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:01 crc kubenswrapper[4902]: I0121 15:00:01.119793 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th"] Jan 21 15:00:01 crc kubenswrapper[4902]: I0121 15:00:01.692699 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ada0d02-9902-4746-b1ad-42b3f9e711a7" containerID="7ee1e059c9213e4cad45fc2396c6626d215288fb3b3b38f6079f8306a505e407" exitCode=0 Jan 21 15:00:01 crc kubenswrapper[4902]: I0121 15:00:01.692892 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" event={"ID":"0ada0d02-9902-4746-b1ad-42b3f9e711a7","Type":"ContainerDied","Data":"7ee1e059c9213e4cad45fc2396c6626d215288fb3b3b38f6079f8306a505e407"} Jan 21 15:00:01 crc kubenswrapper[4902]: I0121 15:00:01.693008 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" event={"ID":"0ada0d02-9902-4746-b1ad-42b3f9e711a7","Type":"ContainerStarted","Data":"5510a8d9d406797dbc38fff442dbe2988144de0e86c45b04a74234038024e718"} Jan 21 15:00:02 crc kubenswrapper[4902]: I0121 15:00:02.984686 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.095749 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ada0d02-9902-4746-b1ad-42b3f9e711a7-config-volume\") pod \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.095800 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcjsw\" (UniqueName: \"kubernetes.io/projected/0ada0d02-9902-4746-b1ad-42b3f9e711a7-kube-api-access-wcjsw\") pod \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.095854 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ada0d02-9902-4746-b1ad-42b3f9e711a7-secret-volume\") pod \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.097031 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ada0d02-9902-4746-b1ad-42b3f9e711a7-config-volume" (OuterVolumeSpecName: "config-volume") pod "0ada0d02-9902-4746-b1ad-42b3f9e711a7" (UID: "0ada0d02-9902-4746-b1ad-42b3f9e711a7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.101383 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ada0d02-9902-4746-b1ad-42b3f9e711a7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0ada0d02-9902-4746-b1ad-42b3f9e711a7" (UID: "0ada0d02-9902-4746-b1ad-42b3f9e711a7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.102645 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ada0d02-9902-4746-b1ad-42b3f9e711a7-kube-api-access-wcjsw" (OuterVolumeSpecName: "kube-api-access-wcjsw") pod "0ada0d02-9902-4746-b1ad-42b3f9e711a7" (UID: "0ada0d02-9902-4746-b1ad-42b3f9e711a7"). InnerVolumeSpecName "kube-api-access-wcjsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.197505 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ada0d02-9902-4746-b1ad-42b3f9e711a7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.197537 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcjsw\" (UniqueName: \"kubernetes.io/projected/0ada0d02-9902-4746-b1ad-42b3f9e711a7-kube-api-access-wcjsw\") on node \"crc\" DevicePath \"\"" Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.197549 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ada0d02-9902-4746-b1ad-42b3f9e711a7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.709023 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" event={"ID":"0ada0d02-9902-4746-b1ad-42b3f9e711a7","Type":"ContainerDied","Data":"5510a8d9d406797dbc38fff442dbe2988144de0e86c45b04a74234038024e718"} Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.709096 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5510a8d9d406797dbc38fff442dbe2988144de0e86c45b04a74234038024e718" Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.709093 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:20 crc kubenswrapper[4902]: I0121 15:00:20.657159 4902 scope.go:117] "RemoveContainer" containerID="2c30f8fcf44519868021b999009e6e0a364f65ba9bb5e12d8b816868d45e7ed6" Jan 21 15:00:20 crc kubenswrapper[4902]: I0121 15:00:20.684227 4902 scope.go:117] "RemoveContainer" containerID="b544fa374d13ef6e784a8d5d16f0cdb36de690b191b7cd286db841a786a83df0" Jan 21 15:00:20 crc kubenswrapper[4902]: I0121 15:00:20.736295 4902 scope.go:117] "RemoveContainer" containerID="51583e6b97e071d7cf96bdf513ff863344bb3712ef59fd993cdce4376b16aa3c" Jan 21 15:00:20 crc kubenswrapper[4902]: I0121 15:00:20.764737 4902 scope.go:117] "RemoveContainer" containerID="ae254c62b0513ec3d622f49b853707c7b475818d264ba6a9ceb8efcfd14f5993" Jan 21 15:01:17 crc kubenswrapper[4902]: I0121 15:01:17.769578 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:01:17 crc kubenswrapper[4902]: I0121 15:01:17.770214 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:01:20 crc kubenswrapper[4902]: I0121 15:01:20.844703 4902 scope.go:117] "RemoveContainer" containerID="184ed0c03e177484d5129302f45e661a1a2c46bd5bca5080444db5e2821f6ed4" Jan 21 15:01:20 crc kubenswrapper[4902]: I0121 15:01:20.877237 4902 scope.go:117] "RemoveContainer" containerID="40c9945717c6eed6957b84780ec6e3c2301b7187e2ec047124eab88f68c26607" Jan 21 15:01:20 crc kubenswrapper[4902]: I0121 15:01:20.896590 4902 scope.go:117] "RemoveContainer" containerID="670dee5a8d2ff2f59f49370b068ca6bd9c9b2aa28c545aa7b4fee5f803108537" Jan 21 15:01:20 crc kubenswrapper[4902]: I0121 15:01:20.929406 4902 scope.go:117] "RemoveContainer" containerID="616e23f05d0b14c7f93dad0c321acc148cd9b2f70ea9019e00391345fff5c7ec" Jan 21 15:01:20 crc kubenswrapper[4902]: I0121 15:01:20.977875 4902 scope.go:117] "RemoveContainer" containerID="a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" Jan 21 15:01:20 crc kubenswrapper[4902]: I0121 15:01:20.993861 4902 scope.go:117] "RemoveContainer" containerID="e91c9182d83789cb593143e414372ebd78fcb513ff497dbf59abde2ed01e0281" Jan 21 15:01:21 crc kubenswrapper[4902]: I0121 15:01:21.031836 4902 scope.go:117] "RemoveContainer" containerID="7ba71046f87bc5f37e174f3f4e4802a75f487d3b7ef216e3060c7e05c5b07755" Jan 21 15:01:21 crc kubenswrapper[4902]: I0121 15:01:21.049736 4902 scope.go:117] "RemoveContainer" containerID="70aa2cf0840fc5f93cbebf841da43d8a387c82a6f9fae61768e764946c976710" Jan 21 15:01:47 crc kubenswrapper[4902]: I0121 15:01:47.769935 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:01:47 crc kubenswrapper[4902]: I0121 15:01:47.770830 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:02:17 crc kubenswrapper[4902]: I0121 15:02:17.769365 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:02:17 crc kubenswrapper[4902]: I0121 15:02:17.770061 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:02:17 crc kubenswrapper[4902]: I0121 15:02:17.770142 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:02:17 crc kubenswrapper[4902]: I0121 15:02:17.770950 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:02:17 crc kubenswrapper[4902]: I0121 15:02:17.771033 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" gracePeriod=600 Jan 21 15:02:17 crc kubenswrapper[4902]: E0121 15:02:17.878443 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6c85cc7_ee09_4640_ab22_ce79d086ad7a.slice/crio-conmon-b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6c85cc7_ee09_4640_ab22_ce79d086ad7a.slice/crio-b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:02:17 crc kubenswrapper[4902]: E0121 15:02:17.896932 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:02:18 crc kubenswrapper[4902]: I0121 15:02:18.818204 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" exitCode=0 Jan 21 15:02:18 crc kubenswrapper[4902]: I0121 15:02:18.818244 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"} Jan 21 15:02:18 crc kubenswrapper[4902]: I0121 15:02:18.818277 4902 scope.go:117] "RemoveContainer" containerID="faf0ff0caeac282dde2bef565f9dbd539a4c5633dd4c8ba54b6bd0e6704b0a61" Jan 21 15:02:18 crc kubenswrapper[4902]: I0121 15:02:18.818837 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:02:18 crc kubenswrapper[4902]: E0121 15:02:18.819154 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:02:21 crc kubenswrapper[4902]: I0121 15:02:21.153372 4902 scope.go:117] "RemoveContainer" containerID="878874319de7ae0b30076fed21352753826b954ce4e5342f533a40aa94a4f9e8" Jan 21 15:02:21 crc kubenswrapper[4902]: I0121 15:02:21.213489 4902 scope.go:117] "RemoveContainer" containerID="80f1113ebae178430104e31cb438bfd4b8237fd75e17bfe92c4d153d21a7d7b4" Jan 21 15:02:21 crc kubenswrapper[4902]: I0121 15:02:21.252129 4902 scope.go:117] "RemoveContainer" containerID="f49c0a85c7357d87bfb57238893040a47cac5bc0bd2e46a347d2884a529aa300" Jan 21 15:02:34 crc kubenswrapper[4902]: I0121 15:02:34.295294 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:02:34 crc kubenswrapper[4902]: E0121 15:02:34.296192 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:02:45 crc kubenswrapper[4902]: I0121 15:02:45.295022 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:02:45 crc kubenswrapper[4902]: E0121 15:02:45.296145 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:02:58 crc kubenswrapper[4902]: I0121 15:02:58.298710 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:02:58 crc kubenswrapper[4902]: E0121 15:02:58.299446 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:03:11 crc kubenswrapper[4902]: I0121 15:03:11.295179 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:03:11 crc kubenswrapper[4902]: E0121 15:03:11.295810 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:03:26 crc kubenswrapper[4902]: I0121 15:03:26.295300 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:03:26 crc kubenswrapper[4902]: E0121 15:03:26.296033 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:03:40 crc kubenswrapper[4902]: I0121 15:03:40.294581 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:03:40 crc kubenswrapper[4902]: E0121 15:03:40.295546 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.318449 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7qls4"] Jan 21 15:03:44 crc kubenswrapper[4902]: E0121 15:03:44.319414 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ada0d02-9902-4746-b1ad-42b3f9e711a7" containerName="collect-profiles" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.319427 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ada0d02-9902-4746-b1ad-42b3f9e711a7" containerName="collect-profiles" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.319572 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ada0d02-9902-4746-b1ad-42b3f9e711a7" containerName="collect-profiles" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.321181 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.338746 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-utilities\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.338897 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-catalog-content\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.339027 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2hb9\" (UniqueName: \"kubernetes.io/projected/42b124a0-69eb-423b-9303-c39fc8881a4d-kube-api-access-k2hb9\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.372828 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7qls4"] Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.439696 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-utilities\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.439824 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-catalog-content\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.439890 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2hb9\" (UniqueName: \"kubernetes.io/projected/42b124a0-69eb-423b-9303-c39fc8881a4d-kube-api-access-k2hb9\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.440629 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-utilities\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.440738 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-catalog-content\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.471167 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k2hb9\" (UniqueName: \"kubernetes.io/projected/42b124a0-69eb-423b-9303-c39fc8881a4d-kube-api-access-k2hb9\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.651577 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:45 crc kubenswrapper[4902]: I0121 15:03:45.159947 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7qls4"] Jan 21 15:03:45 crc kubenswrapper[4902]: I0121 15:03:45.925389 4902 generic.go:334] "Generic (PLEG): container finished" podID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerID="5fedad8f9fadbd6fbeb2cc7d7a45ba88db6a66a98f3529b9df3eea4163559678" exitCode=0 Jan 21 15:03:45 crc kubenswrapper[4902]: I0121 15:03:45.925452 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qls4" event={"ID":"42b124a0-69eb-423b-9303-c39fc8881a4d","Type":"ContainerDied","Data":"5fedad8f9fadbd6fbeb2cc7d7a45ba88db6a66a98f3529b9df3eea4163559678"} Jan 21 15:03:45 crc kubenswrapper[4902]: I0121 15:03:45.925486 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qls4" event={"ID":"42b124a0-69eb-423b-9303-c39fc8881a4d","Type":"ContainerStarted","Data":"706edc1d7426e3cd62c0ef63834715c5d190790b0c9569569ab948a674b91051"} Jan 21 15:03:45 crc kubenswrapper[4902]: I0121 15:03:45.927548 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.717366 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7vpk9"] Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.719695 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.753695 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7vpk9"] Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.873323 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clx2v\" (UniqueName: \"kubernetes.io/projected/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-kube-api-access-clx2v\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.873422 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-catalog-content\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.873517 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-utilities\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.975189 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clx2v\" (UniqueName: \"kubernetes.io/projected/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-kube-api-access-clx2v\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.975306 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-catalog-content\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.975354 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-utilities\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.975830 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-catalog-content\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.975924 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-utilities\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:47 crc kubenswrapper[4902]: I0121 15:03:47.003839 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-clx2v\" (UniqueName: \"kubernetes.io/projected/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-kube-api-access-clx2v\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:47 crc kubenswrapper[4902]: I0121 15:03:47.057592 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:47 crc kubenswrapper[4902]: I0121 15:03:47.348791 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7vpk9"] Jan 21 15:03:47 crc kubenswrapper[4902]: W0121 15:03:47.361175 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d2ff121_c8ec_43d3_b97d_e2f164b9f847.slice/crio-e875616804386b93d0ffc56d15792663f14f3e2f21397c783ad065bf8edceedc WatchSource:0}: Error finding container e875616804386b93d0ffc56d15792663f14f3e2f21397c783ad065bf8edceedc: Status 404 returned error can't find the container with id e875616804386b93d0ffc56d15792663f14f3e2f21397c783ad065bf8edceedc Jan 21 15:03:47 crc kubenswrapper[4902]: I0121 15:03:47.942407 4902 generic.go:334] "Generic (PLEG): container finished" podID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerID="13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e" exitCode=0 Jan 21 15:03:47 crc kubenswrapper[4902]: I0121 15:03:47.942463 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vpk9" event={"ID":"8d2ff121-c8ec-43d3-b97d-e2f164b9f847","Type":"ContainerDied","Data":"13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e"} Jan 21 15:03:47 crc kubenswrapper[4902]: I0121 15:03:47.942487 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vpk9" event={"ID":"8d2ff121-c8ec-43d3-b97d-e2f164b9f847","Type":"ContainerStarted","Data":"e875616804386b93d0ffc56d15792663f14f3e2f21397c783ad065bf8edceedc"} Jan 21 15:03:47 crc kubenswrapper[4902]: I0121 15:03:47.945489 4902 generic.go:334] "Generic (PLEG): container finished" podID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerID="76f48702894400b1b02016cf71f8f4d7d8d3fc5d7d9bf5ccc45da8d5b224203b" exitCode=0 Jan 21 15:03:47 crc kubenswrapper[4902]: I0121 15:03:47.945525 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qls4" event={"ID":"42b124a0-69eb-423b-9303-c39fc8881a4d","Type":"ContainerDied","Data":"76f48702894400b1b02016cf71f8f4d7d8d3fc5d7d9bf5ccc45da8d5b224203b"} Jan 21 15:03:48 crc kubenswrapper[4902]: I0121 15:03:48.954005 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qls4" event={"ID":"42b124a0-69eb-423b-9303-c39fc8881a4d","Type":"ContainerStarted","Data":"fe03e2c63c8b6e000dbdbfd4e692fc0ca2d4978df9701c9e736e8a878c1f5549"} Jan 21 15:03:48 crc kubenswrapper[4902]: I0121 15:03:48.973622 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7qls4" podStartSLOduration=2.5471077429999998 podStartE2EDuration="4.97360404s" podCreationTimestamp="2026-01-21 15:03:44 +0000 UTC" firstStartedPulling="2026-01-21 15:03:45.927257311 +0000 UTC m=+1788.004090350" lastFinishedPulling="2026-01-21 15:03:48.353753618 +0000 UTC m=+1790.430586647" observedRunningTime="2026-01-21 15:03:48.968088093 +0000 UTC 
m=+1791.044921142" watchObservedRunningTime="2026-01-21 15:03:48.97360404 +0000 UTC m=+1791.050437069" Jan 21 15:03:52 crc kubenswrapper[4902]: I0121 15:03:52.985889 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vpk9" event={"ID":"8d2ff121-c8ec-43d3-b97d-e2f164b9f847","Type":"ContainerStarted","Data":"e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0"} Jan 21 15:03:53 crc kubenswrapper[4902]: I0121 15:03:53.294154 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:03:53 crc kubenswrapper[4902]: E0121 15:03:53.294719 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:03:53 crc kubenswrapper[4902]: I0121 15:03:53.997897 4902 generic.go:334] "Generic (PLEG): container finished" podID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerID="e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0" exitCode=0 Jan 21 15:03:53 crc kubenswrapper[4902]: I0121 15:03:53.997966 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vpk9" event={"ID":"8d2ff121-c8ec-43d3-b97d-e2f164b9f847","Type":"ContainerDied","Data":"e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0"} Jan 21 15:03:54 crc kubenswrapper[4902]: I0121 15:03:54.652709 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:54 crc kubenswrapper[4902]: I0121 15:03:54.652961 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:54 crc kubenswrapper[4902]: I0121 15:03:54.692318 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:55 crc kubenswrapper[4902]: I0121 15:03:55.008012 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vpk9" event={"ID":"8d2ff121-c8ec-43d3-b97d-e2f164b9f847","Type":"ContainerStarted","Data":"4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec"} Jan 21 15:03:55 crc kubenswrapper[4902]: I0121 15:03:55.032055 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7vpk9" podStartSLOduration=2.576999078 podStartE2EDuration="9.03202514s" podCreationTimestamp="2026-01-21 15:03:46 +0000 UTC" firstStartedPulling="2026-01-21 15:03:47.945783292 +0000 UTC m=+1790.022616321" lastFinishedPulling="2026-01-21 15:03:54.400809354 +0000 UTC m=+1796.477642383" observedRunningTime="2026-01-21 15:03:55.029270757 +0000 UTC m=+1797.106103796" watchObservedRunningTime="2026-01-21 15:03:55.03202514 +0000 UTC m=+1797.108858169" Jan 21 15:03:55 crc kubenswrapper[4902]: I0121 15:03:55.058122 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:56 crc kubenswrapper[4902]: I0121 15:03:56.720136 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7qls4"] 
Jan 21 15:03:57 crc kubenswrapper[4902]: I0121 15:03:57.020801 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7qls4" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerName="registry-server" containerID="cri-o://fe03e2c63c8b6e000dbdbfd4e692fc0ca2d4978df9701c9e736e8a878c1f5549" gracePeriod=2
Jan 21 15:03:57 crc kubenswrapper[4902]: I0121 15:03:57.058181 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7vpk9"
Jan 21 15:03:57 crc kubenswrapper[4902]: I0121 15:03:57.058250 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7vpk9"
Jan 21 15:03:57 crc kubenswrapper[4902]: I0121 15:03:57.110133 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7vpk9"
Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.034783 4902 generic.go:334] "Generic (PLEG): container finished" podID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerID="fe03e2c63c8b6e000dbdbfd4e692fc0ca2d4978df9701c9e736e8a878c1f5549" exitCode=0
Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.035747 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qls4" event={"ID":"42b124a0-69eb-423b-9303-c39fc8881a4d","Type":"ContainerDied","Data":"fe03e2c63c8b6e000dbdbfd4e692fc0ca2d4978df9701c9e736e8a878c1f5549"}
Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.112840 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7qls4"
Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.248178 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-utilities\") pod \"42b124a0-69eb-423b-9303-c39fc8881a4d\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") "
Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.248284 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-catalog-content\") pod \"42b124a0-69eb-423b-9303-c39fc8881a4d\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") "
Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.248362 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2hb9\" (UniqueName: \"kubernetes.io/projected/42b124a0-69eb-423b-9303-c39fc8881a4d-kube-api-access-k2hb9\") pod \"42b124a0-69eb-423b-9303-c39fc8881a4d\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") "
Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.249264 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-utilities" (OuterVolumeSpecName: "utilities") pod "42b124a0-69eb-423b-9303-c39fc8881a4d" (UID: "42b124a0-69eb-423b-9303-c39fc8881a4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.255277 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b124a0-69eb-423b-9303-c39fc8881a4d-kube-api-access-k2hb9" (OuterVolumeSpecName: "kube-api-access-k2hb9") pod "42b124a0-69eb-423b-9303-c39fc8881a4d" (UID: "42b124a0-69eb-423b-9303-c39fc8881a4d"). InnerVolumeSpecName "kube-api-access-k2hb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.350101 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.350141 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2hb9\" (UniqueName: \"kubernetes.io/projected/42b124a0-69eb-423b-9303-c39fc8881a4d-kube-api-access-k2hb9\") on node \"crc\" DevicePath \"\""
Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.043326 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qls4" event={"ID":"42b124a0-69eb-423b-9303-c39fc8881a4d","Type":"ContainerDied","Data":"706edc1d7426e3cd62c0ef63834715c5d190790b0c9569569ab948a674b91051"}
Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.043395 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7qls4"
Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.043393 4902 scope.go:117] "RemoveContainer" containerID="fe03e2c63c8b6e000dbdbfd4e692fc0ca2d4978df9701c9e736e8a878c1f5549"
Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.062531 4902 scope.go:117] "RemoveContainer" containerID="76f48702894400b1b02016cf71f8f4d7d8d3fc5d7d9bf5ccc45da8d5b224203b"
Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.095383 4902 scope.go:117] "RemoveContainer" containerID="5fedad8f9fadbd6fbeb2cc7d7a45ba88db6a66a98f3529b9df3eea4163559678"
Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.282442 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42b124a0-69eb-423b-9303-c39fc8881a4d" (UID: "42b124a0-69eb-423b-9303-c39fc8881a4d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.368771 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.380967 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7qls4"]
Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.389417 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7qls4"]
Jan 21 15:04:00 crc kubenswrapper[4902]: I0121 15:04:00.303720 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" path="/var/lib/kubelet/pods/42b124a0-69eb-423b-9303-c39fc8881a4d/volumes"
Jan 21 15:04:07 crc kubenswrapper[4902]: I0121 15:04:07.119218 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7vpk9"
Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.299898 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"
Jan 21 15:04:08 crc kubenswrapper[4902]: E0121 15:04:08.300264 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.304454 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7vpk9"]
Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.372523 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-26g5j"]
Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.372787 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-26g5j" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerName="registry-server" containerID="cri-o://bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6" gracePeriod=2
Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.753242 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-26g5j"
Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.812357 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-utilities\") pod \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") "
Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.812454 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-catalog-content\") pod \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") "
Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.812482 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmh97\" (UniqueName: \"kubernetes.io/projected/9904001f-3d1f-494d-bfb6-5baa56f45c7b-kube-api-access-vmh97\") pod \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") "
Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.813970 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-utilities" (OuterVolumeSpecName: "utilities") pod "9904001f-3d1f-494d-bfb6-5baa56f45c7b" (UID: "9904001f-3d1f-494d-bfb6-5baa56f45c7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.817706 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9904001f-3d1f-494d-bfb6-5baa56f45c7b-kube-api-access-vmh97" (OuterVolumeSpecName: "kube-api-access-vmh97") pod "9904001f-3d1f-494d-bfb6-5baa56f45c7b" (UID: "9904001f-3d1f-494d-bfb6-5baa56f45c7b"). InnerVolumeSpecName "kube-api-access-vmh97". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.882194 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9904001f-3d1f-494d-bfb6-5baa56f45c7b" (UID: "9904001f-3d1f-494d-bfb6-5baa56f45c7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.913898 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.913935 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.913949 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmh97\" (UniqueName: \"kubernetes.io/projected/9904001f-3d1f-494d-bfb6-5baa56f45c7b-kube-api-access-vmh97\") on node \"crc\" DevicePath \"\""
Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.124347 4902 generic.go:334] "Generic (PLEG): container finished" podID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerID="bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6" exitCode=0
Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.124397 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26g5j" event={"ID":"9904001f-3d1f-494d-bfb6-5baa56f45c7b","Type":"ContainerDied","Data":"bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6"}
Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.124411 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-26g5j"
Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.124429 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26g5j" event={"ID":"9904001f-3d1f-494d-bfb6-5baa56f45c7b","Type":"ContainerDied","Data":"739b3544e777bebaead10779acdf44cab51721b0171dbd10be4cd7129f38efe6"}
Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.124450 4902 scope.go:117] "RemoveContainer" containerID="bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6"
Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.146747 4902 scope.go:117] "RemoveContainer" containerID="324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7"
Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.189633 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-26g5j"]
Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.194830 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-26g5j"]
Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.203558 4902 scope.go:117] "RemoveContainer" containerID="de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5"
Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.232236 4902 scope.go:117] "RemoveContainer" containerID="bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6"
Jan 21 15:04:09 crc kubenswrapper[4902]: E0121 15:04:09.232497 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6\": container with ID starting with bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6 not found: ID does not exist" containerID="bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6"
Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.232524 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6"} err="failed to get container status \"bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6\": rpc error: code = NotFound desc = could not find container \"bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6\": container with ID starting with bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6 not found: ID does not exist"
Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.232545 4902 scope.go:117] "RemoveContainer" containerID="324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7"
Jan 21 15:04:09 crc kubenswrapper[4902]: E0121 15:04:09.232718 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7\": container with ID starting with 324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7 not found: ID does not exist" containerID="324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7"
Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.232742 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7"} err="failed to get container status \"324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7\": rpc error: code = NotFound desc = could not find container \"324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7\": container with ID starting with 324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7 not found: ID does not exist"
Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.232757 4902 scope.go:117] "RemoveContainer" containerID="de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5"
Jan 21 15:04:09 crc kubenswrapper[4902]: E0121 15:04:09.233110 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5\": container with ID starting with de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5 not found: ID does not exist" containerID="de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5"
Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.233131 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5"} err="failed to get container status \"de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5\": rpc error: code = NotFound desc = could not find container \"de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5\": container with ID starting with de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5 not found: ID does not exist"
Jan 21 15:04:10 crc kubenswrapper[4902]: I0121 15:04:10.303655 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" path="/var/lib/kubelet/pods/9904001f-3d1f-494d-bfb6-5baa56f45c7b/volumes"
Jan 21 15:04:23 crc kubenswrapper[4902]: I0121 15:04:23.294643 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"
Jan 21 15:04:23 crc kubenswrapper[4902]: E0121 15:04:23.295367 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:04:34 crc kubenswrapper[4902]: I0121 15:04:34.296304 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"
Jan 21 15:04:34 crc kubenswrapper[4902]: E0121 15:04:34.297566 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:04:48 crc kubenswrapper[4902]: I0121 15:04:48.300318 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"
Jan 21 15:04:48 crc kubenswrapper[4902]: E0121 15:04:48.301059 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:05:01 crc kubenswrapper[4902]: I0121 15:05:01.295276 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"
Jan 21 15:05:01 crc kubenswrapper[4902]: E0121 15:05:01.297499 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:05:12 crc kubenswrapper[4902]: I0121 15:05:12.295687 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"
Jan 21 15:05:12 crc kubenswrapper[4902]: E0121 15:05:12.297655 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:05:21 crc kubenswrapper[4902]: I0121 15:05:21.403851 4902 scope.go:117] "RemoveContainer" containerID="2cde6e75f7222b067d6b79b31a7ebe5313dd71bd0e2b68973e655c5cf6f0a600"
Jan 21 15:05:24 crc kubenswrapper[4902]: I0121 15:05:24.295259 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"
Jan 21 15:05:24 crc kubenswrapper[4902]: E0121 15:05:24.296132 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:05:39 crc kubenswrapper[4902]: I0121 15:05:39.294791 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"
Jan 21 15:05:39 crc kubenswrapper[4902]: E0121 15:05:39.296509 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:05:52 crc kubenswrapper[4902]: I0121 15:05:52.295719 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"
Jan 21 15:05:52 crc kubenswrapper[4902]: E0121 15:05:52.296621 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:06:04 crc kubenswrapper[4902]: I0121 15:06:04.295107 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"
Jan 21 15:06:04 crc kubenswrapper[4902]: E0121 15:06:04.295526 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:06:18 crc kubenswrapper[4902]: I0121 15:06:18.304665 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"
Jan 21 15:06:18 crc kubenswrapper[4902]: E0121 15:06:18.305513 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:06:21 crc kubenswrapper[4902]: I0121 15:06:21.439709 4902 scope.go:117] "RemoveContainer" containerID="3422abd8be70782bbc83b0962ea81e9a80ba6bb8f97ab843aad91552b7ac69ef"
Jan 21 15:06:21 crc kubenswrapper[4902]: I0121 15:06:21.461439 4902 scope.go:117] "RemoveContainer" containerID="af29f10ace20181a510bff8176fb59c730f1f35a22ee04c32fe59d5d86239e27"
Jan 21 15:06:33 crc kubenswrapper[4902]: I0121 15:06:33.294428 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"
Jan 21 15:06:33 crc kubenswrapper[4902]: E0121 15:06:33.296127 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:06:47 crc kubenswrapper[4902]: I0121 15:06:47.295083 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"
Jan 21 15:06:47 crc kubenswrapper[4902]: E0121 15:06:47.296323 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:07:01 crc kubenswrapper[4902]: I0121 15:07:01.294840 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"
Jan 21 15:07:01 crc kubenswrapper[4902]: E0121 15:07:01.295570 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:07:16 crc kubenswrapper[4902]: I0121 15:07:16.294971 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"
Jan 21 15:07:16 crc kubenswrapper[4902]: E0121 15:07:16.295584 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:07:29 crc kubenswrapper[4902]: I0121 15:07:29.295801 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"
Jan 21 15:07:29 crc kubenswrapper[4902]: I0121 15:07:29.746859 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"92fd37d3aa001b2164e48ad0a17e03e78770a1a688c0222739493af9ad719afa"}
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.596216 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8p4nv"]
Jan 21 15:09:22 crc kubenswrapper[4902]: E0121 15:09:22.597126 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerName="extract-content"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.597141 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerName="extract-content"
Jan 21 15:09:22 crc kubenswrapper[4902]: E0121 15:09:22.597162 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerName="extract-utilities"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.597170 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerName="extract-utilities"
Jan 21 15:09:22 crc kubenswrapper[4902]: E0121 15:09:22.597183 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerName="extract-content"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.597193 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerName="extract-content"
Jan 21 15:09:22 crc kubenswrapper[4902]: E0121 15:09:22.597213 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerName="registry-server"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.597221 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerName="registry-server"
Jan 21 15:09:22 crc kubenswrapper[4902]: E0121 15:09:22.597232 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerName="registry-server"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.597239 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerName="registry-server"
Jan 21 15:09:22 crc kubenswrapper[4902]: E0121 15:09:22.597251 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerName="extract-utilities"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.597258 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerName="extract-utilities"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.597420 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerName="registry-server"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.597436 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerName="registry-server"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.598620 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.609714 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p4nv"]
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.705997 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-utilities\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.706075 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv8w5\" (UniqueName: \"kubernetes.io/projected/23ad955a-b6d3-482a-808b-710ec9253c20-kube-api-access-nv8w5\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.706158 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-catalog-content\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.807253 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv8w5\" (UniqueName: \"kubernetes.io/projected/23ad955a-b6d3-482a-808b-710ec9253c20-kube-api-access-nv8w5\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.807320 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-catalog-content\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.807396 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-utilities\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.807854 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-utilities\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.808127 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-catalog-content\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.833883 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv8w5\" (UniqueName: \"kubernetes.io/projected/23ad955a-b6d3-482a-808b-710ec9253c20-kube-api-access-nv8w5\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.924410 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:23 crc kubenswrapper[4902]: I0121 15:09:23.383417 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p4nv"]
Jan 21 15:09:23 crc kubenswrapper[4902]: I0121 15:09:23.691740 4902 generic.go:334] "Generic (PLEG): container finished" podID="23ad955a-b6d3-482a-808b-710ec9253c20" containerID="7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab" exitCode=0
Jan 21 15:09:23 crc kubenswrapper[4902]: I0121 15:09:23.691814 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p4nv" event={"ID":"23ad955a-b6d3-482a-808b-710ec9253c20","Type":"ContainerDied","Data":"7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab"}
Jan 21 15:09:23 crc kubenswrapper[4902]: I0121 15:09:23.692360 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p4nv" event={"ID":"23ad955a-b6d3-482a-808b-710ec9253c20","Type":"ContainerStarted","Data":"e7625bdb50ae89961afbe71fc32892a8dc04a83d1cc81623c6be51fd71d594af"}
Jan 21 15:09:23 crc kubenswrapper[4902]: I0121 15:09:23.694082 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 15:09:24 crc kubenswrapper[4902]: I0121 15:09:24.700793 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p4nv" event={"ID":"23ad955a-b6d3-482a-808b-710ec9253c20","Type":"ContainerStarted","Data":"bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994"}
Jan 21 15:09:25 crc kubenswrapper[4902]: I0121 15:09:25.708179 4902 generic.go:334] "Generic (PLEG): container finished" podID="23ad955a-b6d3-482a-808b-710ec9253c20" containerID="bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994" exitCode=0
Jan 21 15:09:25 crc kubenswrapper[4902]: I0121 15:09:25.708231 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p4nv" event={"ID":"23ad955a-b6d3-482a-808b-710ec9253c20","Type":"ContainerDied","Data":"bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994"}
Jan 21 15:09:26 crc kubenswrapper[4902]: I0121 15:09:26.718179 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p4nv" event={"ID":"23ad955a-b6d3-482a-808b-710ec9253c20","Type":"ContainerStarted","Data":"555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5"}
Jan 21 15:09:26 crc kubenswrapper[4902]: I0121 15:09:26.754121 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8p4nv" podStartSLOduration=2.379950037 podStartE2EDuration="4.754091555s" podCreationTimestamp="2026-01-21 15:09:22 +0000 UTC" firstStartedPulling="2026-01-21 15:09:23.693782023 +0000 UTC m=+2125.770615062" lastFinishedPulling="2026-01-21 15:09:26.067923551 +0000 UTC m=+2128.144756580" observedRunningTime="2026-01-21 15:09:26.745885509 +0000 UTC m=+2128.822718538" watchObservedRunningTime="2026-01-21 15:09:26.754091555 +0000 UTC m=+2128.830924624"
Jan 21 15:09:32 crc kubenswrapper[4902]: I0121 15:09:32.925307 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:32 crc kubenswrapper[4902]: I0121 15:09:32.925900 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:32 crc kubenswrapper[4902]: I0121 15:09:32.975061 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:33 crc kubenswrapper[4902]: I0121 15:09:33.818905 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:33 crc kubenswrapper[4902]: I0121 15:09:33.859459 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p4nv"]
Jan 21 15:09:35 crc kubenswrapper[4902]: I0121 15:09:35.784210 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8p4nv" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" containerName="registry-server" containerID="cri-o://555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5" gracePeriod=2
Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.758688 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.792252 4902 generic.go:334] "Generic (PLEG): container finished" podID="23ad955a-b6d3-482a-808b-710ec9253c20" containerID="555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5" exitCode=0
Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.792301 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p4nv" event={"ID":"23ad955a-b6d3-482a-808b-710ec9253c20","Type":"ContainerDied","Data":"555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5"}
Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.792326 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p4nv" event={"ID":"23ad955a-b6d3-482a-808b-710ec9253c20","Type":"ContainerDied","Data":"e7625bdb50ae89961afbe71fc32892a8dc04a83d1cc81623c6be51fd71d594af"}
Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.792335 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8p4nv"
Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.792342 4902 scope.go:117] "RemoveContainer" containerID="555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5"
Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.820273 4902 scope.go:117] "RemoveContainer" containerID="bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994"
Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.835680 4902 scope.go:117] "RemoveContainer" containerID="7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab"
Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.836081 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv8w5\" (UniqueName: \"kubernetes.io/projected/23ad955a-b6d3-482a-808b-710ec9253c20-kube-api-access-nv8w5\") pod \"23ad955a-b6d3-482a-808b-710ec9253c20\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") "
Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.836202 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-utilities\") pod \"23ad955a-b6d3-482a-808b-710ec9253c20\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") "
Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.836278 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-catalog-content\") pod \"23ad955a-b6d3-482a-808b-710ec9253c20\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") "
Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.837403 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-utilities" (OuterVolumeSpecName: "utilities") pod "23ad955a-b6d3-482a-808b-710ec9253c20" (UID: "23ad955a-b6d3-482a-808b-710ec9253c20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.837810 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.842564 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23ad955a-b6d3-482a-808b-710ec9253c20-kube-api-access-nv8w5" (OuterVolumeSpecName: "kube-api-access-nv8w5") pod "23ad955a-b6d3-482a-808b-710ec9253c20" (UID: "23ad955a-b6d3-482a-808b-710ec9253c20"). InnerVolumeSpecName "kube-api-access-nv8w5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.858780 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23ad955a-b6d3-482a-808b-710ec9253c20" (UID: "23ad955a-b6d3-482a-808b-710ec9253c20"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.886142 4902 scope.go:117] "RemoveContainer" containerID="555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5" Jan 21 15:09:36 crc kubenswrapper[4902]: E0121 15:09:36.886712 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5\": container with ID starting with 555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5 not found: ID does not exist" containerID="555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.886752 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5"} err="failed to get container status \"555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5\": rpc error: code = NotFound desc = could not find container \"555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5\": container with ID starting with 555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5 not found: ID does not exist" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.886778 4902 scope.go:117] "RemoveContainer" containerID="bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994" Jan 21 15:09:36 crc kubenswrapper[4902]: E0121 15:09:36.887229 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994\": container with ID starting with bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994 not found: ID does not exist" containerID="bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.887265 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994"} err="failed to get container status \"bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994\": rpc error: code = NotFound desc = could not find container \"bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994\": container with ID starting with bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994 not found: ID does not exist" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.887288 4902 scope.go:117] "RemoveContainer" containerID="7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab" Jan 21 15:09:36 crc kubenswrapper[4902]: E0121 15:09:36.887577 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab\": container with ID starting with 7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab not found: ID does not exist" containerID="7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.887665 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab"} err="failed to get container status \"7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab\": rpc error: code = NotFound desc = could not 
find container \"7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab\": container with ID starting with 7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab not found: ID does not exist" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.939321 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.939582 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv8w5\" (UniqueName: \"kubernetes.io/projected/23ad955a-b6d3-482a-808b-710ec9253c20-kube-api-access-nv8w5\") on node \"crc\" DevicePath \"\"" Jan 21 15:09:37 crc kubenswrapper[4902]: I0121 15:09:37.122090 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p4nv"] Jan 21 15:09:37 crc kubenswrapper[4902]: I0121 15:09:37.129648 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p4nv"] Jan 21 15:09:38 crc kubenswrapper[4902]: I0121 15:09:38.309553 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" path="/var/lib/kubelet/pods/23ad955a-b6d3-482a-808b-710ec9253c20/volumes" Jan 21 15:09:47 crc kubenswrapper[4902]: I0121 15:09:47.770109 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:09:47 crc kubenswrapper[4902]: I0121 15:09:47.770723 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:10:17 crc kubenswrapper[4902]: I0121 15:10:17.769883 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:10:17 crc kubenswrapper[4902]: I0121 15:10:17.770934 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:10:47 crc kubenswrapper[4902]: I0121 15:10:47.770561 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:10:47 crc kubenswrapper[4902]: I0121 15:10:47.771230 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 21 15:10:47 crc kubenswrapper[4902]: I0121 15:10:47.771290 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:10:47 crc kubenswrapper[4902]: I0121 15:10:47.772117 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92fd37d3aa001b2164e48ad0a17e03e78770a1a688c0222739493af9ad719afa"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:10:47 crc kubenswrapper[4902]: I0121 15:10:47.772198 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://92fd37d3aa001b2164e48ad0a17e03e78770a1a688c0222739493af9ad719afa" gracePeriod=600 Jan 21 15:10:48 crc kubenswrapper[4902]: I0121 15:10:48.300652 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="92fd37d3aa001b2164e48ad0a17e03e78770a1a688c0222739493af9ad719afa" exitCode=0 Jan 21 15:10:48 crc kubenswrapper[4902]: I0121 15:10:48.307698 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"92fd37d3aa001b2164e48ad0a17e03e78770a1a688c0222739493af9ad719afa"} Jan 21 15:10:48 crc kubenswrapper[4902]: I0121 15:10:48.307807 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27"} Jan 21 15:10:48 crc kubenswrapper[4902]: I0121 15:10:48.307827 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.211663 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wjsh4"] Jan 21 15:11:09 crc kubenswrapper[4902]: E0121 15:11:09.213814 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" containerName="extract-utilities" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.213961 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" containerName="extract-utilities" Jan 21 15:11:09 crc kubenswrapper[4902]: E0121 15:11:09.214093 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" containerName="extract-content" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.214191 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" containerName="extract-content" Jan 21 15:11:09 crc kubenswrapper[4902]: E0121 15:11:09.214313 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" containerName="registry-server" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.214394 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" containerName="registry-server" 
Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.214659 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" containerName="registry-server" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.216002 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.222596 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wjsh4"] Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.321060 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5fe57c1-6b56-4abe-8067-dd74165e5937-catalog-content\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.321113 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5fe57c1-6b56-4abe-8067-dd74165e5937-utilities\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.321189 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z2qw\" (UniqueName: \"kubernetes.io/projected/e5fe57c1-6b56-4abe-8067-dd74165e5937-kube-api-access-8z2qw\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.422249 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5fe57c1-6b56-4abe-8067-dd74165e5937-catalog-content\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.422536 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5fe57c1-6b56-4abe-8067-dd74165e5937-utilities\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.422636 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z2qw\" (UniqueName: \"kubernetes.io/projected/e5fe57c1-6b56-4abe-8067-dd74165e5937-kube-api-access-8z2qw\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.423312 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5fe57c1-6b56-4abe-8067-dd74165e5937-utilities\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.423551 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e5fe57c1-6b56-4abe-8067-dd74165e5937-catalog-content\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.454750 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z2qw\" (UniqueName: \"kubernetes.io/projected/e5fe57c1-6b56-4abe-8067-dd74165e5937-kube-api-access-8z2qw\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.532815 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:10 crc kubenswrapper[4902]: I0121 15:11:10.042532 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wjsh4"] Jan 21 15:11:10 crc kubenswrapper[4902]: W0121 15:11:10.049216 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5fe57c1_6b56_4abe_8067_dd74165e5937.slice/crio-89b7465d8dc818cfaa2e1689a3da79184e64fc5acdd505be8d6d0a84f726f3c2 WatchSource:0}: Error finding container 89b7465d8dc818cfaa2e1689a3da79184e64fc5acdd505be8d6d0a84f726f3c2: Status 404 returned error can't find the container with id 89b7465d8dc818cfaa2e1689a3da79184e64fc5acdd505be8d6d0a84f726f3c2 Jan 21 15:11:10 crc kubenswrapper[4902]: I0121 15:11:10.469251 4902 generic.go:334] "Generic (PLEG): container finished" podID="e5fe57c1-6b56-4abe-8067-dd74165e5937" containerID="344d19118c86f03fbf2f993a19127bdedf8b15658b6934e157f5be424839cc2b" exitCode=0 Jan 21 15:11:10 crc kubenswrapper[4902]: I0121 15:11:10.469301 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjsh4" event={"ID":"e5fe57c1-6b56-4abe-8067-dd74165e5937","Type":"ContainerDied","Data":"344d19118c86f03fbf2f993a19127bdedf8b15658b6934e157f5be424839cc2b"} Jan 21 15:11:10 crc kubenswrapper[4902]: I0121 15:11:10.469327 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjsh4" event={"ID":"e5fe57c1-6b56-4abe-8067-dd74165e5937","Type":"ContainerStarted","Data":"89b7465d8dc818cfaa2e1689a3da79184e64fc5acdd505be8d6d0a84f726f3c2"} Jan 21 15:11:14 crc kubenswrapper[4902]: I0121 15:11:14.497031 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjsh4" event={"ID":"e5fe57c1-6b56-4abe-8067-dd74165e5937","Type":"ContainerStarted","Data":"34374407baf34d68a7781ace1d2fb9bc3bdb37a4c1cf7aa74f8ac2a1a35e8926"} Jan 21 15:11:15 crc kubenswrapper[4902]: I0121 15:11:15.509363 4902 generic.go:334] "Generic (PLEG): container finished" podID="e5fe57c1-6b56-4abe-8067-dd74165e5937" containerID="34374407baf34d68a7781ace1d2fb9bc3bdb37a4c1cf7aa74f8ac2a1a35e8926" exitCode=0 Jan 21 15:11:15 crc kubenswrapper[4902]: I0121 15:11:15.509423 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjsh4" event={"ID":"e5fe57c1-6b56-4abe-8067-dd74165e5937","Type":"ContainerDied","Data":"34374407baf34d68a7781ace1d2fb9bc3bdb37a4c1cf7aa74f8ac2a1a35e8926"} Jan 21 15:11:16 crc kubenswrapper[4902]: I0121 15:11:16.517456 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjsh4" 
event={"ID":"e5fe57c1-6b56-4abe-8067-dd74165e5937","Type":"ContainerStarted","Data":"19a8d1133a2e6685cc9bc7e7f47735379996d6da281f38f489fde9576b3c3a8b"} Jan 21 15:11:16 crc kubenswrapper[4902]: I0121 15:11:16.537291 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wjsh4" podStartSLOduration=2.091553252 podStartE2EDuration="7.537272809s" podCreationTimestamp="2026-01-21 15:11:09 +0000 UTC" firstStartedPulling="2026-01-21 15:11:10.471309725 +0000 UTC m=+2232.548142754" lastFinishedPulling="2026-01-21 15:11:15.917029272 +0000 UTC m=+2237.993862311" observedRunningTime="2026-01-21 15:11:16.53356349 +0000 UTC m=+2238.610396519" watchObservedRunningTime="2026-01-21 15:11:16.537272809 +0000 UTC m=+2238.614105838" Jan 21 15:11:19 crc kubenswrapper[4902]: I0121 15:11:19.534530 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:19 crc kubenswrapper[4902]: I0121 15:11:19.534825 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:19 crc kubenswrapper[4902]: I0121 15:11:19.585759 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:29 crc kubenswrapper[4902]: I0121 15:11:29.577750 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:29 crc kubenswrapper[4902]: I0121 15:11:29.646086 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wjsh4"] Jan 21 15:11:29 crc kubenswrapper[4902]: I0121 15:11:29.683424 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wx2t6"] Jan 21 15:11:29 crc kubenswrapper[4902]: I0121 15:11:29.684206 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wx2t6" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="registry-server" containerID="cri-o://f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b" gracePeriod=2 Jan 21 15:11:29 crc kubenswrapper[4902]: E0121 15:11:29.769299 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b is running failed: container process not found" containerID="f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:11:29 crc kubenswrapper[4902]: E0121 15:11:29.770021 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b is running failed: container process not found" containerID="f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:11:29 crc kubenswrapper[4902]: E0121 15:11:29.770828 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b is running failed: container process not found" 
containerID="f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:11:29 crc kubenswrapper[4902]: E0121 15:11:29.771104 4902 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-wx2t6" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="registry-server" Jan 21 15:11:30 crc kubenswrapper[4902]: I0121 15:11:30.612932 4902 generic.go:334] "Generic (PLEG): container finished" podID="a1458bec-2134-4eb6-8510-ece2a6568215" containerID="f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b" exitCode=0 Jan 21 15:11:30 crc kubenswrapper[4902]: I0121 15:11:30.613012 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx2t6" event={"ID":"a1458bec-2134-4eb6-8510-ece2a6568215","Type":"ContainerDied","Data":"f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b"} Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.184935 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wx2t6" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.330760 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxvzq\" (UniqueName: \"kubernetes.io/projected/a1458bec-2134-4eb6-8510-ece2a6568215-kube-api-access-bxvzq\") pod \"a1458bec-2134-4eb6-8510-ece2a6568215\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.331322 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-utilities\") pod \"a1458bec-2134-4eb6-8510-ece2a6568215\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.331372 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-catalog-content\") pod \"a1458bec-2134-4eb6-8510-ece2a6568215\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.334565 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-utilities" (OuterVolumeSpecName: "utilities") pod "a1458bec-2134-4eb6-8510-ece2a6568215" (UID: "a1458bec-2134-4eb6-8510-ece2a6568215"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.346860 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1458bec-2134-4eb6-8510-ece2a6568215-kube-api-access-bxvzq" (OuterVolumeSpecName: "kube-api-access-bxvzq") pod "a1458bec-2134-4eb6-8510-ece2a6568215" (UID: "a1458bec-2134-4eb6-8510-ece2a6568215"). InnerVolumeSpecName "kube-api-access-bxvzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.385613 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1458bec-2134-4eb6-8510-ece2a6568215" (UID: "a1458bec-2134-4eb6-8510-ece2a6568215"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.432323 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxvzq\" (UniqueName: \"kubernetes.io/projected/a1458bec-2134-4eb6-8510-ece2a6568215-kube-api-access-bxvzq\") on node \"crc\" DevicePath \"\"" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.432365 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.432378 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.622537 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx2t6" event={"ID":"a1458bec-2134-4eb6-8510-ece2a6568215","Type":"ContainerDied","Data":"0c33f9b7fd46d05c8e52b7ed0e8c0e3ee3e633992cb415fa75bef4908ef2fa1f"} Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.623178 4902 scope.go:117] "RemoveContainer" containerID="f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.622606 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wx2t6" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.651890 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wx2t6"] Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.655141 4902 scope.go:117] "RemoveContainer" containerID="43adeb973bdbf05aa4340e69a147ab41031881fc3cf5bd920322ca643738ff13" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.658314 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wx2t6"] Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.684968 4902 scope.go:117] "RemoveContainer" containerID="75dbfffe1a292d59aebf0dda1372b5bf1cb539e9684f4315cb02199044a5774e" Jan 21 15:11:32 crc kubenswrapper[4902]: I0121 15:11:32.308237 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" path="/var/lib/kubelet/pods/a1458bec-2134-4eb6-8510-ece2a6568215/volumes" Jan 21 15:13:17 crc kubenswrapper[4902]: I0121 15:13:17.770027 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:13:17 crc kubenswrapper[4902]: I0121 15:13:17.770826 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.050213 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wzkws"] Jan 21 15:13:45 crc kubenswrapper[4902]: E0121 15:13:45.051113 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="registry-server" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.051127 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="registry-server" Jan 21 15:13:45 crc kubenswrapper[4902]: E0121 15:13:45.051146 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="extract-content" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.051153 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="extract-content" Jan 21 15:13:45 crc kubenswrapper[4902]: E0121 15:13:45.051173 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="extract-utilities" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.051180 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="extract-utilities" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.051299 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="registry-server" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.052889 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.089289 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wzkws"] Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.223421 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-catalog-content\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.223562 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzht7\" (UniqueName: \"kubernetes.io/projected/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-kube-api-access-vzht7\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.223602 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-utilities\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.324751 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzht7\" (UniqueName: \"kubernetes.io/projected/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-kube-api-access-vzht7\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.324794 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-utilities\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.325326 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-utilities\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.325443 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-catalog-content\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.325877 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-catalog-content\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.352334 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vzht7\" (UniqueName: \"kubernetes.io/projected/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-kube-api-access-vzht7\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.381234 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.823278 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wzkws"] Jan 21 15:13:46 crc kubenswrapper[4902]: I0121 15:13:46.099500 4902 generic.go:334] "Generic (PLEG): container finished" podID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerID="7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2" exitCode=0 Jan 21 15:13:46 crc kubenswrapper[4902]: I0121 15:13:46.099609 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzkws" event={"ID":"f66198ce-ce00-4f3e-9c56-a90edc66a3d8","Type":"ContainerDied","Data":"7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2"} Jan 21 15:13:46 crc kubenswrapper[4902]: I0121 15:13:46.099837 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzkws" event={"ID":"f66198ce-ce00-4f3e-9c56-a90edc66a3d8","Type":"ContainerStarted","Data":"7379ea3435a22eb51df6ace035a8b6585b355ca8dc72320a5ee6641459c189cc"} Jan 21 15:13:47 crc kubenswrapper[4902]: I0121 15:13:47.107804 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzkws" event={"ID":"f66198ce-ce00-4f3e-9c56-a90edc66a3d8","Type":"ContainerStarted","Data":"0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d"} Jan 21 15:13:47 crc kubenswrapper[4902]: I0121 15:13:47.769630 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:13:47 crc kubenswrapper[4902]: I0121 15:13:47.770340 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:13:48 crc kubenswrapper[4902]: I0121 15:13:48.120226 4902 generic.go:334] "Generic (PLEG): container finished" podID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerID="0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d" exitCode=0 Jan 21 15:13:48 crc kubenswrapper[4902]: I0121 15:13:48.120268 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzkws" event={"ID":"f66198ce-ce00-4f3e-9c56-a90edc66a3d8","Type":"ContainerDied","Data":"0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d"} Jan 21 15:13:49 crc kubenswrapper[4902]: I0121 15:13:49.130584 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzkws" event={"ID":"f66198ce-ce00-4f3e-9c56-a90edc66a3d8","Type":"ContainerStarted","Data":"be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c"} Jan 21 15:13:49 crc kubenswrapper[4902]: I0121 15:13:49.157014 4902 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wzkws" podStartSLOduration=1.7332472700000001 podStartE2EDuration="4.157000399s" podCreationTimestamp="2026-01-21 15:13:45 +0000 UTC" firstStartedPulling="2026-01-21 15:13:46.101544857 +0000 UTC m=+2388.178377896" lastFinishedPulling="2026-01-21 15:13:48.525297996 +0000 UTC m=+2390.602131025" observedRunningTime="2026-01-21 15:13:49.154812601 +0000 UTC m=+2391.231645650" watchObservedRunningTime="2026-01-21 15:13:49.157000399 +0000 UTC m=+2391.233833428" Jan 21 15:13:55 crc kubenswrapper[4902]: I0121 15:13:55.381912 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:55 crc kubenswrapper[4902]: I0121 15:13:55.382329 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:55 crc kubenswrapper[4902]: I0121 15:13:55.444409 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:56 crc kubenswrapper[4902]: I0121 15:13:56.223184 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:56 crc kubenswrapper[4902]: I0121 15:13:56.267422 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wzkws"] Jan 21 15:13:58 crc kubenswrapper[4902]: I0121 15:13:58.194240 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wzkws" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerName="registry-server" containerID="cri-o://be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c" gracePeriod=2 Jan 21 15:13:59 crc kubenswrapper[4902]: I0121 15:13:59.760020 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:59 crc kubenswrapper[4902]: I0121 15:13:59.949000 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-utilities\") pod \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " Jan 21 15:13:59 crc kubenswrapper[4902]: I0121 15:13:59.949384 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-catalog-content\") pod \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " Jan 21 15:13:59 crc kubenswrapper[4902]: I0121 15:13:59.949565 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzht7\" (UniqueName: \"kubernetes.io/projected/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-kube-api-access-vzht7\") pod \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " Jan 21 15:13:59 crc kubenswrapper[4902]: I0121 15:13:59.950210 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-utilities" (OuterVolumeSpecName: "utilities") pod "f66198ce-ce00-4f3e-9c56-a90edc66a3d8" (UID: "f66198ce-ce00-4f3e-9c56-a90edc66a3d8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:13:59 crc kubenswrapper[4902]: I0121 15:13:59.956862 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-kube-api-access-vzht7" (OuterVolumeSpecName: "kube-api-access-vzht7") pod "f66198ce-ce00-4f3e-9c56-a90edc66a3d8" (UID: "f66198ce-ce00-4f3e-9c56-a90edc66a3d8"). InnerVolumeSpecName "kube-api-access-vzht7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.051403 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.051716 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzht7\" (UniqueName: \"kubernetes.io/projected/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-kube-api-access-vzht7\") on node \"crc\" DevicePath \"\"" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.082111 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f66198ce-ce00-4f3e-9c56-a90edc66a3d8" (UID: "f66198ce-ce00-4f3e-9c56-a90edc66a3d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.153491 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.208473 4902 generic.go:334] "Generic (PLEG): container finished" podID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerID="be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c" exitCode=0 Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.208515 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzkws" event={"ID":"f66198ce-ce00-4f3e-9c56-a90edc66a3d8","Type":"ContainerDied","Data":"be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c"} Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.208540 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzkws" event={"ID":"f66198ce-ce00-4f3e-9c56-a90edc66a3d8","Type":"ContainerDied","Data":"7379ea3435a22eb51df6ace035a8b6585b355ca8dc72320a5ee6641459c189cc"} Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.208557 4902 scope.go:117] "RemoveContainer" containerID="be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.208567 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.226807 4902 scope.go:117] "RemoveContainer" containerID="0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.244602 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wzkws"] Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.249471 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wzkws"] Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.261352 4902 scope.go:117] "RemoveContainer" containerID="7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.301889 4902 scope.go:117] "RemoveContainer" containerID="be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c" Jan 21 15:14:00 crc kubenswrapper[4902]: E0121 15:14:00.302285 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c\": container with ID starting with be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c not found: ID does not exist" containerID="be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.302317 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c"} err="failed to get container status \"be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c\": rpc error: code = NotFound desc = could not find container \"be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c\": container with ID starting with be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c not found: ID does not exist" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.302337 4902 scope.go:117] "RemoveContainer" containerID="0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.303818 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" path="/var/lib/kubelet/pods/f66198ce-ce00-4f3e-9c56-a90edc66a3d8/volumes" Jan 21 15:14:00 crc kubenswrapper[4902]: E0121 15:14:00.304455 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d\": container with ID starting with 0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d not found: ID does not exist" containerID="0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.304505 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d"} err="failed to get container status \"0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d\": rpc error: code = NotFound desc = could not find container \"0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d\": container with ID starting with 0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d not found: ID does not exist" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 
15:14:00.304536 4902 scope.go:117] "RemoveContainer" containerID="7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2" Jan 21 15:14:00 crc kubenswrapper[4902]: E0121 15:14:00.304869 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2\": container with ID starting with 7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2 not found: ID does not exist" containerID="7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.304902 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2"} err="failed to get container status \"7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2\": rpc error: code = NotFound desc = could not find container \"7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2\": container with ID starting with 7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2 not found: ID does not exist" Jan 21 15:14:17 crc kubenswrapper[4902]: I0121 15:14:17.769916 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:14:17 crc kubenswrapper[4902]: I0121 15:14:17.770568 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:14:17 crc kubenswrapper[4902]: I0121 15:14:17.770619 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:14:17 crc kubenswrapper[4902]: I0121 15:14:17.771662 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:14:17 crc kubenswrapper[4902]: I0121 15:14:17.771737 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" gracePeriod=600 Jan 21 15:14:17 crc kubenswrapper[4902]: E0121 15:14:17.907738 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:14:18 crc kubenswrapper[4902]: I0121 15:14:18.361441 4902 generic.go:334] 
"Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" exitCode=0 Jan 21 15:14:18 crc kubenswrapper[4902]: I0121 15:14:18.361520 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27"} Jan 21 15:14:18 crc kubenswrapper[4902]: I0121 15:14:18.361818 4902 scope.go:117] "RemoveContainer" containerID="92fd37d3aa001b2164e48ad0a17e03e78770a1a688c0222739493af9ad719afa" Jan 21 15:14:18 crc kubenswrapper[4902]: I0121 15:14:18.363234 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:14:18 crc kubenswrapper[4902]: E0121 15:14:18.364105 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:14:29 crc kubenswrapper[4902]: I0121 15:14:29.295542 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:14:29 crc kubenswrapper[4902]: E0121 15:14:29.297785 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:14:44 crc kubenswrapper[4902]: I0121 15:14:44.295304 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:14:44 crc kubenswrapper[4902]: E0121 15:14:44.296318 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:14:58 crc kubenswrapper[4902]: I0121 15:14:58.299990 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:14:58 crc kubenswrapper[4902]: E0121 15:14:58.301490 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.144035 4902 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d"] Jan 21 15:15:00 crc kubenswrapper[4902]: E0121 15:15:00.144686 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerName="extract-content" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.144701 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerName="extract-content" Jan 21 15:15:00 crc kubenswrapper[4902]: E0121 15:15:00.144726 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerName="extract-utilities" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.144735 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerName="extract-utilities" Jan 21 15:15:00 crc kubenswrapper[4902]: E0121 15:15:00.144754 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerName="registry-server" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.144763 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerName="registry-server" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.145370 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerName="registry-server" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.145968 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.147815 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.154985 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d"] Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.155822 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.165469 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bebd9484-ab72-4bbd-84e7-99f28795ad85-config-volume\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.165567 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bebd9484-ab72-4bbd-84e7-99f28795ad85-secret-volume\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.165598 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls67j\" (UniqueName: \"kubernetes.io/projected/bebd9484-ab72-4bbd-84e7-99f28795ad85-kube-api-access-ls67j\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.267132 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bebd9484-ab72-4bbd-84e7-99f28795ad85-secret-volume\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.267177 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls67j\" (UniqueName: \"kubernetes.io/projected/bebd9484-ab72-4bbd-84e7-99f28795ad85-kube-api-access-ls67j\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.267221 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bebd9484-ab72-4bbd-84e7-99f28795ad85-config-volume\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.268273 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bebd9484-ab72-4bbd-84e7-99f28795ad85-config-volume\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.275131 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bebd9484-ab72-4bbd-84e7-99f28795ad85-secret-volume\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.288944 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls67j\" (UniqueName: \"kubernetes.io/projected/bebd9484-ab72-4bbd-84e7-99f28795ad85-kube-api-access-ls67j\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.467484 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.905836 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d"] Jan 21 15:15:01 crc kubenswrapper[4902]: I0121 15:15:01.696666 4902 generic.go:334] "Generic (PLEG): container finished" podID="bebd9484-ab72-4bbd-84e7-99f28795ad85" containerID="5f2cc1ae5d9e64887200b316f71af17b596d6725e436d2e46c7acd03a38f0c75" exitCode=0 Jan 21 15:15:01 crc kubenswrapper[4902]: I0121 15:15:01.696730 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" event={"ID":"bebd9484-ab72-4bbd-84e7-99f28795ad85","Type":"ContainerDied","Data":"5f2cc1ae5d9e64887200b316f71af17b596d6725e436d2e46c7acd03a38f0c75"} Jan 21 15:15:01 crc kubenswrapper[4902]: I0121 15:15:01.696963 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" event={"ID":"bebd9484-ab72-4bbd-84e7-99f28795ad85","Type":"ContainerStarted","Data":"18dd01f0dec26129b8ac3b01d72dd971536f6f32d360ef65d3a9bd90a2b6abfc"} Jan 21 15:15:02 crc kubenswrapper[4902]: I0121 15:15:02.974059 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.015932 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bebd9484-ab72-4bbd-84e7-99f28795ad85-secret-volume\") pod \"bebd9484-ab72-4bbd-84e7-99f28795ad85\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.016094 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bebd9484-ab72-4bbd-84e7-99f28795ad85-config-volume\") pod \"bebd9484-ab72-4bbd-84e7-99f28795ad85\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.016124 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls67j\" (UniqueName: \"kubernetes.io/projected/bebd9484-ab72-4bbd-84e7-99f28795ad85-kube-api-access-ls67j\") pod \"bebd9484-ab72-4bbd-84e7-99f28795ad85\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.017631 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bebd9484-ab72-4bbd-84e7-99f28795ad85-config-volume" (OuterVolumeSpecName: "config-volume") pod "bebd9484-ab72-4bbd-84e7-99f28795ad85" (UID: "bebd9484-ab72-4bbd-84e7-99f28795ad85"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.022290 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd9484-ab72-4bbd-84e7-99f28795ad85-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bebd9484-ab72-4bbd-84e7-99f28795ad85" (UID: "bebd9484-ab72-4bbd-84e7-99f28795ad85"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.022626 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebd9484-ab72-4bbd-84e7-99f28795ad85-kube-api-access-ls67j" (OuterVolumeSpecName: "kube-api-access-ls67j") pod "bebd9484-ab72-4bbd-84e7-99f28795ad85" (UID: "bebd9484-ab72-4bbd-84e7-99f28795ad85"). InnerVolumeSpecName "kube-api-access-ls67j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.118130 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bebd9484-ab72-4bbd-84e7-99f28795ad85-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.118181 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls67j\" (UniqueName: \"kubernetes.io/projected/bebd9484-ab72-4bbd-84e7-99f28795ad85-kube-api-access-ls67j\") on node \"crc\" DevicePath \"\"" Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.118196 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bebd9484-ab72-4bbd-84e7-99f28795ad85-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.713669 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" event={"ID":"bebd9484-ab72-4bbd-84e7-99f28795ad85","Type":"ContainerDied","Data":"18dd01f0dec26129b8ac3b01d72dd971536f6f32d360ef65d3a9bd90a2b6abfc"} Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.713720 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18dd01f0dec26129b8ac3b01d72dd971536f6f32d360ef65d3a9bd90a2b6abfc" Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.713757 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:04 crc kubenswrapper[4902]: I0121 15:15:04.042987 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw"] Jan 21 15:15:04 crc kubenswrapper[4902]: I0121 15:15:04.049638 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw"] Jan 21 15:15:04 crc kubenswrapper[4902]: I0121 15:15:04.305887 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70656800-9429-43df-a1cb-7c8617d23b3f" path="/var/lib/kubelet/pods/70656800-9429-43df-a1cb-7c8617d23b3f/volumes" Jan 21 15:15:09 crc kubenswrapper[4902]: I0121 15:15:09.295199 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:15:09 crc kubenswrapper[4902]: E0121 15:15:09.296081 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:15:21 crc kubenswrapper[4902]: I0121 15:15:21.721989 4902 scope.go:117] "RemoveContainer" containerID="de8fcd8c3571217b412f9ba6c688fc875ba6c7c7eb18b7b87d8ab03820c43542" Jan 21 15:15:24 crc kubenswrapper[4902]: I0121 15:15:24.294898 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:15:24 crc kubenswrapper[4902]: E0121 15:15:24.295464 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:15:39 crc kubenswrapper[4902]: I0121 15:15:39.294870 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:15:39 crc kubenswrapper[4902]: E0121 15:15:39.295906 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.469088 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-548m6"] Jan 21 15:15:52 crc kubenswrapper[4902]: E0121 15:15:52.469922 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebd9484-ab72-4bbd-84e7-99f28795ad85" containerName="collect-profiles" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.469939 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebd9484-ab72-4bbd-84e7-99f28795ad85" containerName="collect-profiles" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 
15:15:52.470147 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebd9484-ab72-4bbd-84e7-99f28795ad85" containerName="collect-profiles" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.471266 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.506302 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-548m6"] Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.629013 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d4sc\" (UniqueName: \"kubernetes.io/projected/c3b76b38-be5a-4672-b09e-478fd80b1c0c-kube-api-access-7d4sc\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.629690 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-catalog-content\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.629778 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-utilities\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.731591 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-catalog-content\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.730829 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-catalog-content\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.731745 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-utilities\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.732182 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-utilities\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.732328 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d4sc\" (UniqueName: 
\"kubernetes.io/projected/c3b76b38-be5a-4672-b09e-478fd80b1c0c-kube-api-access-7d4sc\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.757727 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d4sc\" (UniqueName: \"kubernetes.io/projected/c3b76b38-be5a-4672-b09e-478fd80b1c0c-kube-api-access-7d4sc\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.800481 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:53 crc kubenswrapper[4902]: I0121 15:15:53.243670 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-548m6"] Jan 21 15:15:53 crc kubenswrapper[4902]: W0121 15:15:53.248993 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3b76b38_be5a_4672_b09e_478fd80b1c0c.slice/crio-674d9e4d0b5e1c13514bc938797d4379dd0b8c270486271e8d0cf4d945d5cdff WatchSource:0}: Error finding container 674d9e4d0b5e1c13514bc938797d4379dd0b8c270486271e8d0cf4d945d5cdff: Status 404 returned error can't find the container with id 674d9e4d0b5e1c13514bc938797d4379dd0b8c270486271e8d0cf4d945d5cdff Jan 21 15:15:53 crc kubenswrapper[4902]: I0121 15:15:53.294999 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:15:53 crc kubenswrapper[4902]: E0121 15:15:53.295249 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:15:54 crc kubenswrapper[4902]: I0121 15:15:54.132157 4902 generic.go:334] "Generic (PLEG): container finished" podID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerID="584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4" exitCode=0 Jan 21 15:15:54 crc kubenswrapper[4902]: I0121 15:15:54.132596 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548m6" event={"ID":"c3b76b38-be5a-4672-b09e-478fd80b1c0c","Type":"ContainerDied","Data":"584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4"} Jan 21 15:15:54 crc kubenswrapper[4902]: I0121 15:15:54.132652 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548m6" event={"ID":"c3b76b38-be5a-4672-b09e-478fd80b1c0c","Type":"ContainerStarted","Data":"674d9e4d0b5e1c13514bc938797d4379dd0b8c270486271e8d0cf4d945d5cdff"} Jan 21 15:15:54 crc kubenswrapper[4902]: I0121 15:15:54.135357 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:15:56 crc kubenswrapper[4902]: I0121 15:15:56.148937 4902 generic.go:334] "Generic (PLEG): container finished" podID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerID="e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205" exitCode=0 Jan 21 15:15:56 crc 
kubenswrapper[4902]: I0121 15:15:56.149058 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548m6" event={"ID":"c3b76b38-be5a-4672-b09e-478fd80b1c0c","Type":"ContainerDied","Data":"e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205"} Jan 21 15:15:57 crc kubenswrapper[4902]: I0121 15:15:57.159964 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548m6" event={"ID":"c3b76b38-be5a-4672-b09e-478fd80b1c0c","Type":"ContainerStarted","Data":"72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026"} Jan 21 15:15:57 crc kubenswrapper[4902]: I0121 15:15:57.179886 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-548m6" podStartSLOduration=2.722377917 podStartE2EDuration="5.179871183s" podCreationTimestamp="2026-01-21 15:15:52 +0000 UTC" firstStartedPulling="2026-01-21 15:15:54.135062019 +0000 UTC m=+2516.211895068" lastFinishedPulling="2026-01-21 15:15:56.592555305 +0000 UTC m=+2518.669388334" observedRunningTime="2026-01-21 15:15:57.177879417 +0000 UTC m=+2519.254712446" watchObservedRunningTime="2026-01-21 15:15:57.179871183 +0000 UTC m=+2519.256704212" Jan 21 15:16:02 crc kubenswrapper[4902]: I0121 15:16:02.800912 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:16:02 crc kubenswrapper[4902]: I0121 15:16:02.802178 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:16:02 crc kubenswrapper[4902]: I0121 15:16:02.840523 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:16:03 crc kubenswrapper[4902]: I0121 15:16:03.268849 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:16:03 crc kubenswrapper[4902]: I0121 15:16:03.318743 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-548m6"] Jan 21 15:16:05 crc kubenswrapper[4902]: I0121 15:16:05.221347 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-548m6" podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerName="registry-server" containerID="cri-o://72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026" gracePeriod=2 Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.190082 4902 util.go:48] "No ready sandbox for pod can be found. 
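[editor's note] The "Observed pod startup duration" entry above carries two numbers: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the time spent pulling images (lastFinishedPulling minus firstStartedPulling). A back-of-the-envelope recomputation from the logged timestamps, not the tracker's code; tiny residual differences are monotonic-clock skew:

package main

import (
	"fmt"
	"time"
)

// Recompute both durations from the log entry above.
func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-21 15:15:52 +0000 UTC")
	firstPull := parse("2026-01-21 15:15:54.135062019 +0000 UTC")
	lastPull := parse("2026-01-21 15:15:56.592555305 +0000 UTC")
	running := parse("2026-01-21 15:15:57.179871183 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("podStartE2EDuration:", e2e) // 5.179871183s, as logged
	fmt.Println("podStartSLOduration:", slo) // ~2.7224s, matching the logged 2.722377917
}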
Need to start a new one" pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.245193 4902 generic.go:334] "Generic (PLEG): container finished" podID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerID="72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026" exitCode=0 Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.245241 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548m6" event={"ID":"c3b76b38-be5a-4672-b09e-478fd80b1c0c","Type":"ContainerDied","Data":"72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026"} Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.245272 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548m6" event={"ID":"c3b76b38-be5a-4672-b09e-478fd80b1c0c","Type":"ContainerDied","Data":"674d9e4d0b5e1c13514bc938797d4379dd0b8c270486271e8d0cf4d945d5cdff"} Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.245293 4902 scope.go:117] "RemoveContainer" containerID="72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.245320 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.265710 4902 scope.go:117] "RemoveContainer" containerID="e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.282722 4902 scope.go:117] "RemoveContainer" containerID="584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.302640 4902 scope.go:117] "RemoveContainer" containerID="72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026" Jan 21 15:16:06 crc kubenswrapper[4902]: E0121 15:16:06.303447 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026\": container with ID starting with 72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026 not found: ID does not exist" containerID="72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.303486 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026"} err="failed to get container status \"72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026\": rpc error: code = NotFound desc = could not find container \"72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026\": container with ID starting with 72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026 not found: ID does not exist" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.303514 4902 scope.go:117] "RemoveContainer" containerID="e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205" Jan 21 15:16:06 crc kubenswrapper[4902]: E0121 15:16:06.303957 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205\": container with ID starting with e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205 not found: ID does not exist" 
containerID="e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.303984 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205"} err="failed to get container status \"e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205\": rpc error: code = NotFound desc = could not find container \"e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205\": container with ID starting with e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205 not found: ID does not exist" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.303998 4902 scope.go:117] "RemoveContainer" containerID="584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4" Jan 21 15:16:06 crc kubenswrapper[4902]: E0121 15:16:06.305185 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4\": container with ID starting with 584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4 not found: ID does not exist" containerID="584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.305210 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4"} err="failed to get container status \"584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4\": rpc error: code = NotFound desc = could not find container \"584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4\": container with ID starting with 584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4 not found: ID does not exist" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.345669 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-catalog-content\") pod \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.345729 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d4sc\" (UniqueName: \"kubernetes.io/projected/c3b76b38-be5a-4672-b09e-478fd80b1c0c-kube-api-access-7d4sc\") pod \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.345757 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-utilities\") pod \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.347107 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-utilities" (OuterVolumeSpecName: "utilities") pod "c3b76b38-be5a-4672-b09e-478fd80b1c0c" (UID: "c3b76b38-be5a-4672-b09e-478fd80b1c0c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.352232 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b76b38-be5a-4672-b09e-478fd80b1c0c-kube-api-access-7d4sc" (OuterVolumeSpecName: "kube-api-access-7d4sc") pod "c3b76b38-be5a-4672-b09e-478fd80b1c0c" (UID: "c3b76b38-be5a-4672-b09e-478fd80b1c0c"). InnerVolumeSpecName "kube-api-access-7d4sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.398392 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3b76b38-be5a-4672-b09e-478fd80b1c0c" (UID: "c3b76b38-be5a-4672-b09e-478fd80b1c0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.447472 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.447502 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.447514 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d4sc\" (UniqueName: \"kubernetes.io/projected/c3b76b38-be5a-4672-b09e-478fd80b1c0c-kube-api-access-7d4sc\") on node \"crc\" DevicePath \"\"" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.602200 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-548m6"] Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.609849 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-548m6"] Jan 21 15:16:07 crc kubenswrapper[4902]: I0121 15:16:07.295316 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:16:07 crc kubenswrapper[4902]: E0121 15:16:07.295757 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:16:08 crc kubenswrapper[4902]: I0121 15:16:08.304180 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" path="/var/lib/kubelet/pods/c3b76b38-be5a-4672-b09e-478fd80b1c0c/volumes" Jan 21 15:16:22 crc kubenswrapper[4902]: I0121 15:16:22.295270 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:16:22 crc kubenswrapper[4902]: E0121 15:16:22.295971 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:16:36 crc kubenswrapper[4902]: I0121 15:16:36.295058 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:16:36 crc kubenswrapper[4902]: E0121 15:16:36.295859 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:16:48 crc kubenswrapper[4902]: I0121 15:16:48.298864 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:16:48 crc kubenswrapper[4902]: E0121 15:16:48.299345 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:17:01 crc kubenswrapper[4902]: I0121 15:17:01.294918 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:17:01 crc kubenswrapper[4902]: E0121 15:17:01.295611 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:17:14 crc kubenswrapper[4902]: I0121 15:17:14.294864 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:17:14 crc kubenswrapper[4902]: E0121 15:17:14.296021 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:17:28 crc kubenswrapper[4902]: I0121 15:17:28.299956 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:17:28 crc kubenswrapper[4902]: E0121 15:17:28.300931 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:17:42 crc kubenswrapper[4902]: I0121 15:17:42.295201 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:17:42 crc kubenswrapper[4902]: E0121 15:17:42.295793 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:17:54 crc kubenswrapper[4902]: I0121 15:17:54.294604 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:17:54 crc kubenswrapper[4902]: E0121 15:17:54.295395 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:18:08 crc kubenswrapper[4902]: I0121 15:18:08.300678 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:18:08 crc kubenswrapper[4902]: E0121 15:18:08.301568 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:18:21 crc kubenswrapper[4902]: I0121 15:18:21.294966 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:18:21 crc kubenswrapper[4902]: E0121 15:18:21.295810 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:18:36 crc kubenswrapper[4902]: I0121 15:18:36.295000 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:18:36 crc kubenswrapper[4902]: E0121 15:18:36.295975 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:18:50 crc kubenswrapper[4902]: I0121 15:18:50.295489 4902 scope.go:117] "RemoveContainer" 
containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:18:50 crc kubenswrapper[4902]: E0121 15:18:50.296302 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:19:02 crc kubenswrapper[4902]: I0121 15:19:02.295231 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:19:02 crc kubenswrapper[4902]: E0121 15:19:02.295885 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:19:13 crc kubenswrapper[4902]: I0121 15:19:13.295486 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:19:13 crc kubenswrapper[4902]: E0121 15:19:13.296182 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:19:25 crc kubenswrapper[4902]: I0121 15:19:25.295397 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:19:25 crc kubenswrapper[4902]: I0121 15:19:25.771639 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"9118bec5924d18ddd618a8750d04dac5bfd45c7ae04f2acab924299ac7122ce9"} Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.423691 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-znnbw"] Jan 21 15:19:30 crc kubenswrapper[4902]: E0121 15:19:30.424599 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerName="extract-content" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.424618 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerName="extract-content" Jan 21 15:19:30 crc kubenswrapper[4902]: E0121 15:19:30.424660 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerName="extract-utilities" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.424669 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerName="extract-utilities" Jan 21 15:19:30 crc kubenswrapper[4902]: E0121 15:19:30.424681 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerName="registry-server" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.424691 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerName="registry-server" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.424815 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerName="registry-server" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.426944 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.437887 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znnbw"] Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.514679 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-catalog-content\") pod \"redhat-marketplace-znnbw\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.514794 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzgxl\" (UniqueName: \"kubernetes.io/projected/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-kube-api-access-wzgxl\") pod \"redhat-marketplace-znnbw\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.514837 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-utilities\") pod \"redhat-marketplace-znnbw\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.615675 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-catalog-content\") pod \"redhat-marketplace-znnbw\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.615744 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzgxl\" (UniqueName: \"kubernetes.io/projected/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-kube-api-access-wzgxl\") pod \"redhat-marketplace-znnbw\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.615766 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-utilities\") pod \"redhat-marketplace-znnbw\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.616260 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-utilities\") pod \"redhat-marketplace-znnbw\" (UID: 
\"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.616479 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-catalog-content\") pod \"redhat-marketplace-znnbw\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.634284 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzgxl\" (UniqueName: \"kubernetes.io/projected/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-kube-api-access-wzgxl\") pod \"redhat-marketplace-znnbw\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.742966 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:31 crc kubenswrapper[4902]: I0121 15:19:31.155478 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znnbw"] Jan 21 15:19:31 crc kubenswrapper[4902]: I0121 15:19:31.811984 4902 generic.go:334] "Generic (PLEG): container finished" podID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerID="a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee" exitCode=0 Jan 21 15:19:31 crc kubenswrapper[4902]: I0121 15:19:31.812118 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znnbw" event={"ID":"fc75a5cf-c2f6-4ec4-bb1b-715732baded5","Type":"ContainerDied","Data":"a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee"} Jan 21 15:19:31 crc kubenswrapper[4902]: I0121 15:19:31.812429 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znnbw" event={"ID":"fc75a5cf-c2f6-4ec4-bb1b-715732baded5","Type":"ContainerStarted","Data":"7af97d4ed275a1ae5c9629fc436df7ed6ef28556298e74dd885f31e365b940b5"} Jan 21 15:19:33 crc kubenswrapper[4902]: I0121 15:19:33.828246 4902 generic.go:334] "Generic (PLEG): container finished" podID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerID="3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba" exitCode=0 Jan 21 15:19:33 crc kubenswrapper[4902]: I0121 15:19:33.828306 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znnbw" event={"ID":"fc75a5cf-c2f6-4ec4-bb1b-715732baded5","Type":"ContainerDied","Data":"3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba"} Jan 21 15:19:34 crc kubenswrapper[4902]: I0121 15:19:34.837788 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znnbw" event={"ID":"fc75a5cf-c2f6-4ec4-bb1b-715732baded5","Type":"ContainerStarted","Data":"254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57"} Jan 21 15:19:34 crc kubenswrapper[4902]: I0121 15:19:34.856247 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-znnbw" podStartSLOduration=2.451734481 podStartE2EDuration="4.856230214s" podCreationTimestamp="2026-01-21 15:19:30 +0000 UTC" firstStartedPulling="2026-01-21 15:19:31.813850453 +0000 UTC m=+2733.890683482" lastFinishedPulling="2026-01-21 15:19:34.218346186 +0000 UTC m=+2736.295179215" observedRunningTime="2026-01-21 
Jan 21 15:19:34 crc kubenswrapper[4902]: I0121 15:19:34.856247 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-znnbw" podStartSLOduration=2.451734481 podStartE2EDuration="4.856230214s" podCreationTimestamp="2026-01-21 15:19:30 +0000 UTC" firstStartedPulling="2026-01-21 15:19:31.813850453 +0000 UTC m=+2733.890683482" lastFinishedPulling="2026-01-21 15:19:34.218346186 +0000 UTC m=+2736.295179215" observedRunningTime="2026-01-21 15:19:34.853582228 +0000 UTC m=+2736.930415267" watchObservedRunningTime="2026-01-21 15:19:34.856230214 +0000 UTC m=+2736.933063243"
Jan 21 15:19:40 crc kubenswrapper[4902]: I0121 15:19:40.743731 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-znnbw"
Jan 21 15:19:40 crc kubenswrapper[4902]: I0121 15:19:40.744351 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-znnbw"
Jan 21 15:19:40 crc kubenswrapper[4902]: I0121 15:19:40.798309 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-znnbw"
Jan 21 15:19:40 crc kubenswrapper[4902]: I0121 15:19:40.932098 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-znnbw"
Jan 21 15:19:41 crc kubenswrapper[4902]: I0121 15:19:41.046060 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-znnbw"]
Jan 21 15:19:42 crc kubenswrapper[4902]: I0121 15:19:42.895733 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-znnbw" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerName="registry-server" containerID="cri-o://254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57" gracePeriod=2
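[editor's note] gracePeriod=2 above means the runtime gets two seconds between the polite stop signal and the forced kill. The standard pattern, sketched here against a plain OS process for illustration (the kubelet itself delegates this to CRI-O rather than signalling directly):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to grace for the process to exit,
// then falls back to SIGKILL -- the behaviour behind "Killing container
// with a grace period ... gracePeriod=2" in the log above.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		fmt.Println("grace period expired, sending SIGKILL")
		return cmd.Process.Kill()
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = stopWithGrace(cmd, 2*time.Second)
}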
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.549006 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc75a5cf-c2f6-4ec4-bb1b-715732baded5" (UID: "fc75a5cf-c2f6-4ec4-bb1b-715732baded5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.622847 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.622901 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzgxl\" (UniqueName: \"kubernetes.io/projected/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-kube-api-access-wzgxl\") on node \"crc\" DevicePath \"\"" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.622919 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.905107 4902 generic.go:334] "Generic (PLEG): container finished" podID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerID="254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57" exitCode=0 Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.905165 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.905182 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znnbw" event={"ID":"fc75a5cf-c2f6-4ec4-bb1b-715732baded5","Type":"ContainerDied","Data":"254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57"} Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.905531 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znnbw" event={"ID":"fc75a5cf-c2f6-4ec4-bb1b-715732baded5","Type":"ContainerDied","Data":"7af97d4ed275a1ae5c9629fc436df7ed6ef28556298e74dd885f31e365b940b5"} Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.905574 4902 scope.go:117] "RemoveContainer" containerID="254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.928287 4902 scope.go:117] "RemoveContainer" containerID="3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.953495 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-znnbw"] Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.956420 4902 scope.go:117] "RemoveContainer" containerID="a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.958201 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-znnbw"] Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.969844 4902 scope.go:117] "RemoveContainer" containerID="254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57" Jan 21 15:19:43 crc kubenswrapper[4902]: E0121 15:19:43.970377 4902 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57\": container with ID starting with 254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57 not found: ID does not exist" containerID="254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.970432 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57"} err="failed to get container status \"254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57\": rpc error: code = NotFound desc = could not find container \"254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57\": container with ID starting with 254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57 not found: ID does not exist" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.970461 4902 scope.go:117] "RemoveContainer" containerID="3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba" Jan 21 15:19:43 crc kubenswrapper[4902]: E0121 15:19:43.970851 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba\": container with ID starting with 3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba not found: ID does not exist" containerID="3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.970895 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba"} err="failed to get container status \"3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba\": rpc error: code = NotFound desc = could not find container \"3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba\": container with ID starting with 3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba not found: ID does not exist" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.970932 4902 scope.go:117] "RemoveContainer" containerID="a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee" Jan 21 15:19:43 crc kubenswrapper[4902]: E0121 15:19:43.971269 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee\": container with ID starting with a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee not found: ID does not exist" containerID="a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.971317 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee"} err="failed to get container status \"a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee\": rpc error: code = NotFound desc = could not find container \"a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee\": container with ID starting with a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee not found: ID does not exist" Jan 21 15:19:44 crc kubenswrapper[4902]: I0121 15:19:44.309875 4902 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" path="/var/lib/kubelet/pods/fc75a5cf-c2f6-4ec4-bb1b-715732baded5/volumes" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.125668 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5jt5k"] Jan 21 15:21:12 crc kubenswrapper[4902]: E0121 15:21:12.126511 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerName="extract-content" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.126527 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerName="extract-content" Jan 21 15:21:12 crc kubenswrapper[4902]: E0121 15:21:12.126554 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerName="registry-server" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.126563 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerName="registry-server" Jan 21 15:21:12 crc kubenswrapper[4902]: E0121 15:21:12.126587 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerName="extract-utilities" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.126596 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerName="extract-utilities" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.126768 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerName="registry-server" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.147733 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.162088 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jt5k"] Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.176853 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-utilities\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.176913 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8gmg\" (UniqueName: \"kubernetes.io/projected/49a0f3d6-d1a4-40f2-bfa2-25e435864950-kube-api-access-j8gmg\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.177024 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-catalog-content\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.278563 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-utilities\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.278623 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8gmg\" (UniqueName: \"kubernetes.io/projected/49a0f3d6-d1a4-40f2-bfa2-25e435864950-kube-api-access-j8gmg\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.279284 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-catalog-content\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.279298 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-catalog-content\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.279311 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-utilities\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.300999 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j8gmg\" (UniqueName: \"kubernetes.io/projected/49a0f3d6-d1a4-40f2-bfa2-25e435864950-kube-api-access-j8gmg\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.486649 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.952990 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jt5k"] Jan 21 15:21:12 crc kubenswrapper[4902]: W0121 15:21:12.960110 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49a0f3d6_d1a4_40f2_bfa2_25e435864950.slice/crio-2fff381c71980f38e99d9c08a1725ee6490e2b20f384b83405ef046942fdb900 WatchSource:0}: Error finding container 2fff381c71980f38e99d9c08a1725ee6490e2b20f384b83405ef046942fdb900: Status 404 returned error can't find the container with id 2fff381c71980f38e99d9c08a1725ee6490e2b20f384b83405ef046942fdb900 Jan 21 15:21:13 crc kubenswrapper[4902]: I0121 15:21:13.601970 4902 generic.go:334] "Generic (PLEG): container finished" podID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerID="14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065" exitCode=0 Jan 21 15:21:13 crc kubenswrapper[4902]: I0121 15:21:13.602010 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jt5k" event={"ID":"49a0f3d6-d1a4-40f2-bfa2-25e435864950","Type":"ContainerDied","Data":"14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065"} Jan 21 15:21:13 crc kubenswrapper[4902]: I0121 15:21:13.602036 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jt5k" event={"ID":"49a0f3d6-d1a4-40f2-bfa2-25e435864950","Type":"ContainerStarted","Data":"2fff381c71980f38e99d9c08a1725ee6490e2b20f384b83405ef046942fdb900"} Jan 21 15:21:13 crc kubenswrapper[4902]: I0121 15:21:13.605881 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:21:14 crc kubenswrapper[4902]: I0121 15:21:14.615780 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jt5k" event={"ID":"49a0f3d6-d1a4-40f2-bfa2-25e435864950","Type":"ContainerStarted","Data":"fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd"} Jan 21 15:21:15 crc kubenswrapper[4902]: I0121 15:21:15.626894 4902 generic.go:334] "Generic (PLEG): container finished" podID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerID="fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd" exitCode=0 Jan 21 15:21:15 crc kubenswrapper[4902]: I0121 15:21:15.626946 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jt5k" event={"ID":"49a0f3d6-d1a4-40f2-bfa2-25e435864950","Type":"ContainerDied","Data":"fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd"} Jan 21 15:21:16 crc kubenswrapper[4902]: I0121 15:21:16.637206 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jt5k" event={"ID":"49a0f3d6-d1a4-40f2-bfa2-25e435864950","Type":"ContainerStarted","Data":"958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f"} Jan 21 15:21:16 crc kubenswrapper[4902]: I0121 
15:21:16.660680 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5jt5k" podStartSLOduration=1.902132151 podStartE2EDuration="4.660662566s" podCreationTimestamp="2026-01-21 15:21:12 +0000 UTC" firstStartedPulling="2026-01-21 15:21:13.60515322 +0000 UTC m=+2835.681986289" lastFinishedPulling="2026-01-21 15:21:16.363683675 +0000 UTC m=+2838.440516704" observedRunningTime="2026-01-21 15:21:16.657466425 +0000 UTC m=+2838.734299464" watchObservedRunningTime="2026-01-21 15:21:16.660662566 +0000 UTC m=+2838.737495595" Jan 21 15:21:22 crc kubenswrapper[4902]: I0121 15:21:22.487100 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:22 crc kubenswrapper[4902]: I0121 15:21:22.487980 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:22 crc kubenswrapper[4902]: I0121 15:21:22.533528 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:22 crc kubenswrapper[4902]: I0121 15:21:22.747232 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:23 crc kubenswrapper[4902]: I0121 15:21:23.910398 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5jt5k"] Jan 21 15:21:24 crc kubenswrapper[4902]: I0121 15:21:24.692943 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5jt5k" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerName="registry-server" containerID="cri-o://958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f" gracePeriod=2 Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.621581 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.700670 4902 generic.go:334] "Generic (PLEG): container finished" podID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerID="958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f" exitCode=0 Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.700710 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jt5k" event={"ID":"49a0f3d6-d1a4-40f2-bfa2-25e435864950","Type":"ContainerDied","Data":"958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f"} Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.700734 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jt5k" event={"ID":"49a0f3d6-d1a4-40f2-bfa2-25e435864950","Type":"ContainerDied","Data":"2fff381c71980f38e99d9c08a1725ee6490e2b20f384b83405ef046942fdb900"} Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.700769 4902 scope.go:117] "RemoveContainer" containerID="958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.700884 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.720561 4902 scope.go:117] "RemoveContainer" containerID="fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.740016 4902 scope.go:117] "RemoveContainer" containerID="14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.778461 4902 scope.go:117] "RemoveContainer" containerID="958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f" Jan 21 15:21:25 crc kubenswrapper[4902]: E0121 15:21:25.779192 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f\": container with ID starting with 958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f not found: ID does not exist" containerID="958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.779234 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f"} err="failed to get container status \"958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f\": rpc error: code = NotFound desc = could not find container \"958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f\": container with ID starting with 958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f not found: ID does not exist" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.779266 4902 scope.go:117] "RemoveContainer" containerID="fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd" Jan 21 15:21:25 crc kubenswrapper[4902]: E0121 15:21:25.779517 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd\": container with ID starting with fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd not found: ID does not exist" containerID="fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.779541 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd"} err="failed to get container status \"fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd\": rpc error: code = NotFound desc = could not find container \"fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd\": container with ID starting with fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd not found: ID does not exist" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.779558 4902 scope.go:117] "RemoveContainer" containerID="14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065" Jan 21 15:21:25 crc kubenswrapper[4902]: E0121 15:21:25.779850 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065\": container with ID starting with 14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065 not found: ID does not exist" containerID="14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065" 
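The RemoveContainer/NotFound pairs in the surrounding entries are a benign race rather than a real failure: by the time the kubelet issues (or retries) the delete, CRI-O has already removed the container, so the runtime service answers gRPC NotFound, the kubelet logs "DeleteContainer returned error", and reconciliation continues. A minimal Go sketch of that tolerate-NotFound pattern against the CRI runtime service follows; the helper name, package name, and client wiring are illustrative assumptions for this log, not the kubelet's actual code path.

```go
// Package criutil: a minimal sketch (assumed names) of idempotent container
// deletion against the CRI runtime service, mirroring why the NotFound
// errors in this log are harmless.
package criutil

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// removeContainerIdempotent deletes a container by ID and treats NotFound
// as success: if the runtime has already garbage-collected the container
// (as CRI-O did for the IDs above), a second delete attempt is a no-op.
func removeContainerIdempotent(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	_, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id})
	if err != nil && status.Code(err) != codes.NotFound {
		// Only surface errors other than NotFound; those indicate a real
		// runtime problem instead of a delete/GC race.
		return fmt.Errorf("remove container %q: %w", id, err)
	}
	return nil
}
```

Under that reading, the E-level "ContainerStatus from runtime service failed ... NotFound" lines here simply record the losing side of the race before the kubelet discards the stale container reference.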
Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.779866 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065"} err="failed to get container status \"14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065\": rpc error: code = NotFound desc = could not find container \"14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065\": container with ID starting with 14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065 not found: ID does not exist" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.783622 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-catalog-content\") pod \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.783717 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-utilities\") pod \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.783783 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8gmg\" (UniqueName: \"kubernetes.io/projected/49a0f3d6-d1a4-40f2-bfa2-25e435864950-kube-api-access-j8gmg\") pod \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.786599 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-utilities" (OuterVolumeSpecName: "utilities") pod "49a0f3d6-d1a4-40f2-bfa2-25e435864950" (UID: "49a0f3d6-d1a4-40f2-bfa2-25e435864950"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.792467 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49a0f3d6-d1a4-40f2-bfa2-25e435864950-kube-api-access-j8gmg" (OuterVolumeSpecName: "kube-api-access-j8gmg") pod "49a0f3d6-d1a4-40f2-bfa2-25e435864950" (UID: "49a0f3d6-d1a4-40f2-bfa2-25e435864950"). InnerVolumeSpecName "kube-api-access-j8gmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.837180 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49a0f3d6-d1a4-40f2-bfa2-25e435864950" (UID: "49a0f3d6-d1a4-40f2-bfa2-25e435864950"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.885675 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8gmg\" (UniqueName: \"kubernetes.io/projected/49a0f3d6-d1a4-40f2-bfa2-25e435864950-kube-api-access-j8gmg\") on node \"crc\" DevicePath \"\"" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.885707 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.885716 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:21:26 crc kubenswrapper[4902]: I0121 15:21:26.034734 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5jt5k"] Jan 21 15:21:26 crc kubenswrapper[4902]: I0121 15:21:26.039735 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5jt5k"] Jan 21 15:21:26 crc kubenswrapper[4902]: I0121 15:21:26.306994 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" path="/var/lib/kubelet/pods/49a0f3d6-d1a4-40f2-bfa2-25e435864950/volumes" Jan 21 15:21:47 crc kubenswrapper[4902]: I0121 15:21:47.770185 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:21:47 crc kubenswrapper[4902]: I0121 15:21:47.770823 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:22:17 crc kubenswrapper[4902]: I0121 15:22:17.770493 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:22:17 crc kubenswrapper[4902]: I0121 15:22:17.771402 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:22:47 crc kubenswrapper[4902]: I0121 15:22:47.769908 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:22:47 crc kubenswrapper[4902]: I0121 15:22:47.770522 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:22:47 crc kubenswrapper[4902]: I0121 15:22:47.770598 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:22:47 crc kubenswrapper[4902]: I0121 15:22:47.771644 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9118bec5924d18ddd618a8750d04dac5bfd45c7ae04f2acab924299ac7122ce9"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:22:47 crc kubenswrapper[4902]: I0121 15:22:47.771774 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://9118bec5924d18ddd618a8750d04dac5bfd45c7ae04f2acab924299ac7122ce9" gracePeriod=600 Jan 21 15:22:48 crc kubenswrapper[4902]: I0121 15:22:48.357254 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="9118bec5924d18ddd618a8750d04dac5bfd45c7ae04f2acab924299ac7122ce9" exitCode=0 Jan 21 15:22:48 crc kubenswrapper[4902]: I0121 15:22:48.357345 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"9118bec5924d18ddd618a8750d04dac5bfd45c7ae04f2acab924299ac7122ce9"} Jan 21 15:22:48 crc kubenswrapper[4902]: I0121 15:22:48.357575 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e"} Jan 21 15:22:48 crc kubenswrapper[4902]: I0121 15:22:48.357618 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.758030 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jh5w8"] Jan 21 15:25:06 crc kubenswrapper[4902]: E0121 15:25:06.760605 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerName="extract-content" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.760627 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerName="extract-content" Jan 21 15:25:06 crc kubenswrapper[4902]: E0121 15:25:06.760636 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerName="registry-server" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.760642 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerName="registry-server" Jan 21 15:25:06 crc kubenswrapper[4902]: E0121 15:25:06.760654 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerName="extract-utilities" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.760661 4902 
state_mem.go:107] "Deleted CPUSet assignment" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerName="extract-utilities" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.760831 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerName="registry-server" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.762028 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.774557 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jh5w8"] Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.857937 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-utilities\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.858086 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-catalog-content\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.858127 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cfqj\" (UniqueName: \"kubernetes.io/projected/f5224a84-9644-4cdc-b3c4-eed2488ae61d-kube-api-access-2cfqj\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.958855 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-utilities\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.958933 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-catalog-content\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.958960 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cfqj\" (UniqueName: \"kubernetes.io/projected/f5224a84-9644-4cdc-b3c4-eed2488ae61d-kube-api-access-2cfqj\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.959878 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-utilities\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.960213 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-catalog-content\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.980432 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cfqj\" (UniqueName: \"kubernetes.io/projected/f5224a84-9644-4cdc-b3c4-eed2488ae61d-kube-api-access-2cfqj\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:07 crc kubenswrapper[4902]: I0121 15:25:07.095520 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:07 crc kubenswrapper[4902]: I0121 15:25:07.526246 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jh5w8"] Jan 21 15:25:07 crc kubenswrapper[4902]: W0121 15:25:07.530670 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5224a84_9644_4cdc_b3c4_eed2488ae61d.slice/crio-0b5459aee9b23bbb40e190f872efd622cfffaa20e0ae6affbe426245a2f1dda1 WatchSource:0}: Error finding container 0b5459aee9b23bbb40e190f872efd622cfffaa20e0ae6affbe426245a2f1dda1: Status 404 returned error can't find the container with id 0b5459aee9b23bbb40e190f872efd622cfffaa20e0ae6affbe426245a2f1dda1 Jan 21 15:25:08 crc kubenswrapper[4902]: I0121 15:25:08.483077 4902 generic.go:334] "Generic (PLEG): container finished" podID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerID="8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811" exitCode=0 Jan 21 15:25:08 crc kubenswrapper[4902]: I0121 15:25:08.483239 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh5w8" event={"ID":"f5224a84-9644-4cdc-b3c4-eed2488ae61d","Type":"ContainerDied","Data":"8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811"} Jan 21 15:25:08 crc kubenswrapper[4902]: I0121 15:25:08.483361 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh5w8" event={"ID":"f5224a84-9644-4cdc-b3c4-eed2488ae61d","Type":"ContainerStarted","Data":"0b5459aee9b23bbb40e190f872efd622cfffaa20e0ae6affbe426245a2f1dda1"} Jan 21 15:25:10 crc kubenswrapper[4902]: I0121 15:25:10.501478 4902 generic.go:334] "Generic (PLEG): container finished" podID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerID="cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85" exitCode=0 Jan 21 15:25:10 crc kubenswrapper[4902]: I0121 15:25:10.501525 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh5w8" event={"ID":"f5224a84-9644-4cdc-b3c4-eed2488ae61d","Type":"ContainerDied","Data":"cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85"} Jan 21 15:25:11 crc kubenswrapper[4902]: I0121 15:25:11.510070 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh5w8" event={"ID":"f5224a84-9644-4cdc-b3c4-eed2488ae61d","Type":"ContainerStarted","Data":"3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581"} Jan 21 15:25:11 crc kubenswrapper[4902]: I0121 15:25:11.525617 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-jh5w8" podStartSLOduration=2.743580443 podStartE2EDuration="5.525597639s" podCreationTimestamp="2026-01-21 15:25:06 +0000 UTC" firstStartedPulling="2026-01-21 15:25:08.484392747 +0000 UTC m=+3070.561225776" lastFinishedPulling="2026-01-21 15:25:11.266409933 +0000 UTC m=+3073.343242972" observedRunningTime="2026-01-21 15:25:11.525598769 +0000 UTC m=+3073.602431798" watchObservedRunningTime="2026-01-21 15:25:11.525597639 +0000 UTC m=+3073.602430668" Jan 21 15:25:17 crc kubenswrapper[4902]: I0121 15:25:17.097507 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:17 crc kubenswrapper[4902]: I0121 15:25:17.098134 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:17 crc kubenswrapper[4902]: I0121 15:25:17.769952 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:25:17 crc kubenswrapper[4902]: I0121 15:25:17.770118 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:25:18 crc kubenswrapper[4902]: I0121 15:25:18.150841 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jh5w8" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="registry-server" probeResult="failure" output=< Jan 21 15:25:18 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 15:25:18 crc kubenswrapper[4902]: > Jan 21 15:25:27 crc kubenswrapper[4902]: I0121 15:25:27.136764 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:27 crc kubenswrapper[4902]: I0121 15:25:27.176998 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:27 crc kubenswrapper[4902]: I0121 15:25:27.367020 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jh5w8"] Jan 21 15:25:28 crc kubenswrapper[4902]: I0121 15:25:28.754899 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jh5w8" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="registry-server" containerID="cri-o://3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581" gracePeriod=2 Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.672926 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.680224 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cfqj\" (UniqueName: \"kubernetes.io/projected/f5224a84-9644-4cdc-b3c4-eed2488ae61d-kube-api-access-2cfqj\") pod \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.680283 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-utilities\") pod \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.680322 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-catalog-content\") pod \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.681572 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-utilities" (OuterVolumeSpecName: "utilities") pod "f5224a84-9644-4cdc-b3c4-eed2488ae61d" (UID: "f5224a84-9644-4cdc-b3c4-eed2488ae61d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.686524 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5224a84-9644-4cdc-b3c4-eed2488ae61d-kube-api-access-2cfqj" (OuterVolumeSpecName: "kube-api-access-2cfqj") pod "f5224a84-9644-4cdc-b3c4-eed2488ae61d" (UID: "f5224a84-9644-4cdc-b3c4-eed2488ae61d"). InnerVolumeSpecName "kube-api-access-2cfqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.766698 4902 generic.go:334] "Generic (PLEG): container finished" podID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerID="3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581" exitCode=0 Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.766736 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh5w8" event={"ID":"f5224a84-9644-4cdc-b3c4-eed2488ae61d","Type":"ContainerDied","Data":"3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581"} Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.766760 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh5w8" event={"ID":"f5224a84-9644-4cdc-b3c4-eed2488ae61d","Type":"ContainerDied","Data":"0b5459aee9b23bbb40e190f872efd622cfffaa20e0ae6affbe426245a2f1dda1"} Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.766777 4902 scope.go:117] "RemoveContainer" containerID="3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.766929 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.783325 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cfqj\" (UniqueName: \"kubernetes.io/projected/f5224a84-9644-4cdc-b3c4-eed2488ae61d-kube-api-access-2cfqj\") on node \"crc\" DevicePath \"\"" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.783359 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.797361 4902 scope.go:117] "RemoveContainer" containerID="cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.825439 4902 scope.go:117] "RemoveContainer" containerID="8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.832248 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5224a84-9644-4cdc-b3c4-eed2488ae61d" (UID: "f5224a84-9644-4cdc-b3c4-eed2488ae61d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.846947 4902 scope.go:117] "RemoveContainer" containerID="3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581" Jan 21 15:25:29 crc kubenswrapper[4902]: E0121 15:25:29.847258 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581\": container with ID starting with 3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581 not found: ID does not exist" containerID="3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.847295 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581"} err="failed to get container status \"3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581\": rpc error: code = NotFound desc = could not find container \"3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581\": container with ID starting with 3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581 not found: ID does not exist" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.847320 4902 scope.go:117] "RemoveContainer" containerID="cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85" Jan 21 15:25:29 crc kubenswrapper[4902]: E0121 15:25:29.847642 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85\": container with ID starting with cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85 not found: ID does not exist" containerID="cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.847675 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85"} err="failed to get 
container status \"cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85\": rpc error: code = NotFound desc = could not find container \"cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85\": container with ID starting with cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85 not found: ID does not exist" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.847696 4902 scope.go:117] "RemoveContainer" containerID="8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811" Jan 21 15:25:29 crc kubenswrapper[4902]: E0121 15:25:29.848299 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811\": container with ID starting with 8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811 not found: ID does not exist" containerID="8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.848325 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811"} err="failed to get container status \"8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811\": rpc error: code = NotFound desc = could not find container \"8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811\": container with ID starting with 8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811 not found: ID does not exist" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.885253 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:25:30 crc kubenswrapper[4902]: I0121 15:25:30.120218 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jh5w8"] Jan 21 15:25:30 crc kubenswrapper[4902]: I0121 15:25:30.126219 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jh5w8"] Jan 21 15:25:30 crc kubenswrapper[4902]: I0121 15:25:30.308449 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" path="/var/lib/kubelet/pods/f5224a84-9644-4cdc-b3c4-eed2488ae61d/volumes" Jan 21 15:25:47 crc kubenswrapper[4902]: I0121 15:25:47.769782 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:25:47 crc kubenswrapper[4902]: I0121 15:25:47.770520 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:26:17 crc kubenswrapper[4902]: I0121 15:26:17.770771 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 
15:26:17 crc kubenswrapper[4902]: I0121 15:26:17.771536 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:26:17 crc kubenswrapper[4902]: I0121 15:26:17.771614 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:26:17 crc kubenswrapper[4902]: I0121 15:26:17.772657 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:26:17 crc kubenswrapper[4902]: I0121 15:26:17.772745 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" gracePeriod=600 Jan 21 15:26:17 crc kubenswrapper[4902]: E0121 15:26:17.894677 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:26:18 crc kubenswrapper[4902]: I0121 15:26:18.162373 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" exitCode=0 Jan 21 15:26:18 crc kubenswrapper[4902]: I0121 15:26:18.162722 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e"} Jan 21 15:26:18 crc kubenswrapper[4902]: I0121 15:26:18.162909 4902 scope.go:117] "RemoveContainer" containerID="9118bec5924d18ddd618a8750d04dac5bfd45c7ae04f2acab924299ac7122ce9" Jan 21 15:26:18 crc kubenswrapper[4902]: I0121 15:26:18.163444 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:26:18 crc kubenswrapper[4902]: E0121 15:26:18.163723 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:26:31 crc kubenswrapper[4902]: I0121 15:26:31.295140 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:26:31 crc 
kubenswrapper[4902]: E0121 15:26:31.296027 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.255879 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mklsf"] Jan 21 15:26:32 crc kubenswrapper[4902]: E0121 15:26:32.256192 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="registry-server" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.256208 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="registry-server" Jan 21 15:26:32 crc kubenswrapper[4902]: E0121 15:26:32.256227 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="extract-utilities" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.256235 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="extract-utilities" Jan 21 15:26:32 crc kubenswrapper[4902]: E0121 15:26:32.256279 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="extract-content" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.256286 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="extract-content" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.256458 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="registry-server" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.257575 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.279290 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mklsf"] Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.354414 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ec2690b-73b2-45db-b14b-355b80ab92a6-catalog-content\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.354490 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ec2690b-73b2-45db-b14b-355b80ab92a6-utilities\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.354567 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cqrk\" (UniqueName: \"kubernetes.io/projected/2ec2690b-73b2-45db-b14b-355b80ab92a6-kube-api-access-6cqrk\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.456226 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ec2690b-73b2-45db-b14b-355b80ab92a6-catalog-content\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.456283 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ec2690b-73b2-45db-b14b-355b80ab92a6-utilities\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.456339 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cqrk\" (UniqueName: \"kubernetes.io/projected/2ec2690b-73b2-45db-b14b-355b80ab92a6-kube-api-access-6cqrk\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.456775 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ec2690b-73b2-45db-b14b-355b80ab92a6-catalog-content\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.456854 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ec2690b-73b2-45db-b14b-355b80ab92a6-utilities\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.476836 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6cqrk\" (UniqueName: \"kubernetes.io/projected/2ec2690b-73b2-45db-b14b-355b80ab92a6-kube-api-access-6cqrk\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.590178 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:33 crc kubenswrapper[4902]: I0121 15:26:33.061884 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mklsf"] Jan 21 15:26:33 crc kubenswrapper[4902]: I0121 15:26:33.284094 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mklsf" event={"ID":"2ec2690b-73b2-45db-b14b-355b80ab92a6","Type":"ContainerStarted","Data":"b7497e30666457211bc1dbff0d19c6d29f8267a666765fa8bec62fbde6239e21"} Jan 21 15:26:33 crc kubenswrapper[4902]: I0121 15:26:33.284182 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mklsf" event={"ID":"2ec2690b-73b2-45db-b14b-355b80ab92a6","Type":"ContainerStarted","Data":"1427ab1882dfac4469780ae6a2ba2c61e1cf315860af91b43a52ef914530182e"} Jan 21 15:26:34 crc kubenswrapper[4902]: I0121 15:26:34.299571 4902 generic.go:334] "Generic (PLEG): container finished" podID="2ec2690b-73b2-45db-b14b-355b80ab92a6" containerID="b7497e30666457211bc1dbff0d19c6d29f8267a666765fa8bec62fbde6239e21" exitCode=0 Jan 21 15:26:34 crc kubenswrapper[4902]: I0121 15:26:34.302678 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:26:34 crc kubenswrapper[4902]: I0121 15:26:34.306523 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mklsf" event={"ID":"2ec2690b-73b2-45db-b14b-355b80ab92a6","Type":"ContainerDied","Data":"b7497e30666457211bc1dbff0d19c6d29f8267a666765fa8bec62fbde6239e21"} Jan 21 15:26:38 crc kubenswrapper[4902]: I0121 15:26:38.335536 4902 generic.go:334] "Generic (PLEG): container finished" podID="2ec2690b-73b2-45db-b14b-355b80ab92a6" containerID="aff16eb520fba6a9e2277db12e779189239f388a24f571354f5779dc3e7d15e7" exitCode=0 Jan 21 15:26:38 crc kubenswrapper[4902]: I0121 15:26:38.335653 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mklsf" event={"ID":"2ec2690b-73b2-45db-b14b-355b80ab92a6","Type":"ContainerDied","Data":"aff16eb520fba6a9e2277db12e779189239f388a24f571354f5779dc3e7d15e7"} Jan 21 15:26:40 crc kubenswrapper[4902]: I0121 15:26:40.352704 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mklsf" event={"ID":"2ec2690b-73b2-45db-b14b-355b80ab92a6","Type":"ContainerStarted","Data":"f70837e6519eb0b6b2c831e5486484d916388f44857bbc9bc3d77a0eeea931f8"} Jan 21 15:26:40 crc kubenswrapper[4902]: I0121 15:26:40.380557 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mklsf" podStartSLOduration=3.382409175 podStartE2EDuration="8.38053745s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="2026-01-21 15:26:34.302350074 +0000 UTC m=+3156.379183113" lastFinishedPulling="2026-01-21 15:26:39.300478359 +0000 UTC m=+3161.377311388" observedRunningTime="2026-01-21 15:26:40.374858052 +0000 UTC m=+3162.451691111" watchObservedRunningTime="2026-01-21 
Jan 21 15:26:42 crc kubenswrapper[4902]: I0121 15:26:42.591247 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mklsf"
Jan 21 15:26:42 crc kubenswrapper[4902]: I0121 15:26:42.591679 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mklsf"
Jan 21 15:26:42 crc kubenswrapper[4902]: I0121 15:26:42.633006 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mklsf"
Jan 21 15:26:43 crc kubenswrapper[4902]: I0121 15:26:43.295313 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e"
Jan 21 15:26:43 crc kubenswrapper[4902]: E0121 15:26:43.295597 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:26:52 crc kubenswrapper[4902]: I0121 15:26:52.638605 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mklsf"
Jan 21 15:26:52 crc kubenswrapper[4902]: I0121 15:26:52.704358 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mklsf"]
Jan 21 15:26:52 crc kubenswrapper[4902]: I0121 15:26:52.748797 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7vpk9"]
Jan 21 15:26:52 crc kubenswrapper[4902]: I0121 15:26:52.749056 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7vpk9" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerName="registry-server" containerID="cri-o://4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec" gracePeriod=2
Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.118109 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7vpk9"
Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.282942 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-utilities\") pod \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") "
Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.283020 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clx2v\" (UniqueName: \"kubernetes.io/projected/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-kube-api-access-clx2v\") pod \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") "
Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.283890 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-utilities" (OuterVolumeSpecName: "utilities") pod "8d2ff121-c8ec-43d3-b97d-e2f164b9f847" (UID: "8d2ff121-c8ec-43d3-b97d-e2f164b9f847"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.284110 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-catalog-content\") pod \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.284474 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.300785 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-kube-api-access-clx2v" (OuterVolumeSpecName: "kube-api-access-clx2v") pod "8d2ff121-c8ec-43d3-b97d-e2f164b9f847" (UID: "8d2ff121-c8ec-43d3-b97d-e2f164b9f847"). InnerVolumeSpecName "kube-api-access-clx2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.340071 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d2ff121-c8ec-43d3-b97d-e2f164b9f847" (UID: "8d2ff121-c8ec-43d3-b97d-e2f164b9f847"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.385796 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.385837 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clx2v\" (UniqueName: \"kubernetes.io/projected/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-kube-api-access-clx2v\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.466863 4902 generic.go:334] "Generic (PLEG): container finished" podID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerID="4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec" exitCode=0 Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.466931 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vpk9" event={"ID":"8d2ff121-c8ec-43d3-b97d-e2f164b9f847","Type":"ContainerDied","Data":"4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec"} Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.466991 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vpk9" event={"ID":"8d2ff121-c8ec-43d3-b97d-e2f164b9f847","Type":"ContainerDied","Data":"e875616804386b93d0ffc56d15792663f14f3e2f21397c783ad065bf8edceedc"} Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.467014 4902 scope.go:117] "RemoveContainer" containerID="4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.466958 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.498871 4902 scope.go:117] "RemoveContainer" containerID="e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.514674 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7vpk9"] Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.521250 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7vpk9"] Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.525397 4902 scope.go:117] "RemoveContainer" containerID="13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.543277 4902 scope.go:117] "RemoveContainer" containerID="4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec" Jan 21 15:26:53 crc kubenswrapper[4902]: E0121 15:26:53.543733 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec\": container with ID starting with 4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec not found: ID does not exist" containerID="4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.543775 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec"} err="failed to get container status \"4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec\": rpc error: code = NotFound desc = could not find container \"4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec\": container with ID starting with 4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec not found: ID does not exist" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.543800 4902 scope.go:117] "RemoveContainer" containerID="e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0" Jan 21 15:26:53 crc kubenswrapper[4902]: E0121 15:26:53.544092 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0\": container with ID starting with e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0 not found: ID does not exist" containerID="e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.544119 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0"} err="failed to get container status \"e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0\": rpc error: code = NotFound desc = could not find container \"e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0\": container with ID starting with e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0 not found: ID does not exist" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.544136 4902 scope.go:117] "RemoveContainer" containerID="13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e" Jan 21 15:26:53 crc kubenswrapper[4902]: E0121 15:26:53.544370 4902 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e\": container with ID starting with 13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e not found: ID does not exist" containerID="13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.544400 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e"} err="failed to get container status \"13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e\": rpc error: code = NotFound desc = could not find container \"13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e\": container with ID starting with 13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e not found: ID does not exist" Jan 21 15:26:54 crc kubenswrapper[4902]: I0121 15:26:54.302448 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" path="/var/lib/kubelet/pods/8d2ff121-c8ec-43d3-b97d-e2f164b9f847/volumes" Jan 21 15:26:58 crc kubenswrapper[4902]: I0121 15:26:58.303081 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:26:58 crc kubenswrapper[4902]: E0121 15:26:58.303725 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:27:12 crc kubenswrapper[4902]: I0121 15:27:12.300171 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:27:12 crc kubenswrapper[4902]: E0121 15:27:12.301232 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:27:24 crc kubenswrapper[4902]: I0121 15:27:24.296277 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:27:24 crc kubenswrapper[4902]: E0121 15:27:24.297412 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:27:36 crc kubenswrapper[4902]: I0121 15:27:36.295494 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:27:36 crc kubenswrapper[4902]: E0121 15:27:36.296639 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:27:50 crc kubenswrapper[4902]: I0121 15:27:50.295295 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:27:50 crc kubenswrapper[4902]: E0121 15:27:50.296890 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:28:03 crc kubenswrapper[4902]: I0121 15:28:03.295185 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:28:03 crc kubenswrapper[4902]: E0121 15:28:03.296017 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:28:15 crc kubenswrapper[4902]: I0121 15:28:15.294687 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:28:15 crc kubenswrapper[4902]: E0121 15:28:15.295470 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:28:26 crc kubenswrapper[4902]: I0121 15:28:26.295190 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:28:26 crc kubenswrapper[4902]: E0121 15:28:26.296468 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:28:41 crc kubenswrapper[4902]: I0121 15:28:41.294785 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:28:41 crc kubenswrapper[4902]: E0121 15:28:41.295655 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 21 15:28:54 crc kubenswrapper[4902]: I0121 15:28:54.295916 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e"
Jan 21 15:28:54 crc kubenswrapper[4902]: E0121 15:28:54.297122 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:29:07 crc kubenswrapper[4902]: I0121 15:29:07.294480 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e"
Jan 21 15:29:07 crc kubenswrapper[4902]: E0121 15:29:07.295174 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:29:22 crc kubenswrapper[4902]: I0121 15:29:22.295513 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e"
Jan 21 15:29:22 crc kubenswrapper[4902]: E0121 15:29:22.296305 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:29:34 crc kubenswrapper[4902]: I0121 15:29:34.295201 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e"
Jan 21 15:29:34 crc kubenswrapper[4902]: E0121 15:29:34.296426 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:29:46 crc kubenswrapper[4902]: I0121 15:29:46.296542 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e"
Jan 21 15:29:46 crc kubenswrapper[4902]: E0121 15:29:46.297533 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
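[annotation] The RemoveContainer / "Error syncing pod" pairs repeating every 10-15 seconds above are sync-loop retries bouncing off an open CrashLoopBackOff window, not fresh crashes. A sketch of how that delay grows, assuming the upstream kubelet defaults (10s initial delay, doubling per restart, 5m cap); only the 5m0s cap is visible in the log itself, the other two values are assumptions:

// backoff_sketch.go — CrashLoopBackOff delay growth under assumed defaults.
package main

import (
	"fmt"
	"time"
)

func crashLoopDelay(restarts int) time.Duration {
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := initial
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> back-off %v\n", r, crashLoopDelay(r))
	}
	// From the fifth restart on, the delay sits at the 5m0s cap that the
	// repeated "back-off 5m0s" entries in this log report.
}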
podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:29:57 crc kubenswrapper[4902]: I0121 15:29:57.294390 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:29:57 crc kubenswrapper[4902]: E0121 15:29:57.295255 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.158885 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg"] Jan 21 15:30:00 crc kubenswrapper[4902]: E0121 15:30:00.159702 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerName="registry-server" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.159715 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerName="registry-server" Jan 21 15:30:00 crc kubenswrapper[4902]: E0121 15:30:00.159726 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerName="extract-utilities" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.159733 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerName="extract-utilities" Jan 21 15:30:00 crc kubenswrapper[4902]: E0121 15:30:00.159743 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerName="extract-content" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.159749 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerName="extract-content" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.159894 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerName="registry-server" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.160374 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.162823 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.163705 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.173989 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg"] Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.346852 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c55xt\" (UniqueName: \"kubernetes.io/projected/e93c6a82-9651-4ed2-a941-9414d9aff62c-kube-api-access-c55xt\") pod \"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.346905 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e93c6a82-9651-4ed2-a941-9414d9aff62c-config-volume\") pod \"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.346978 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e93c6a82-9651-4ed2-a941-9414d9aff62c-secret-volume\") pod \"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.448942 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c55xt\" (UniqueName: \"kubernetes.io/projected/e93c6a82-9651-4ed2-a941-9414d9aff62c-kube-api-access-c55xt\") pod \"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.449008 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e93c6a82-9651-4ed2-a941-9414d9aff62c-config-volume\") pod \"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.449075 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e93c6a82-9651-4ed2-a941-9414d9aff62c-secret-volume\") pod \"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.451435 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e93c6a82-9651-4ed2-a941-9414d9aff62c-config-volume\") pod 
\"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.460005 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e93c6a82-9651-4ed2-a941-9414d9aff62c-secret-volume\") pod \"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.484776 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c55xt\" (UniqueName: \"kubernetes.io/projected/e93c6a82-9651-4ed2-a941-9414d9aff62c-kube-api-access-c55xt\") pod \"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.777566 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:01 crc kubenswrapper[4902]: I0121 15:30:01.241818 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg"] Jan 21 15:30:01 crc kubenswrapper[4902]: I0121 15:30:01.978265 4902 generic.go:334] "Generic (PLEG): container finished" podID="e93c6a82-9651-4ed2-a941-9414d9aff62c" containerID="2d74f71a998726973b118e0b0755aa5903f2b68cb19dc4c893a565df10186a56" exitCode=0 Jan 21 15:30:01 crc kubenswrapper[4902]: I0121 15:30:01.978336 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" event={"ID":"e93c6a82-9651-4ed2-a941-9414d9aff62c","Type":"ContainerDied","Data":"2d74f71a998726973b118e0b0755aa5903f2b68cb19dc4c893a565df10186a56"} Jan 21 15:30:01 crc kubenswrapper[4902]: I0121 15:30:01.978678 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" event={"ID":"e93c6a82-9651-4ed2-a941-9414d9aff62c","Type":"ContainerStarted","Data":"e6b6e42c855295ba91f6834b95903c938c31c49afcc92b34579974a80c3b5cbc"} Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.328667 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.492512 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c55xt\" (UniqueName: \"kubernetes.io/projected/e93c6a82-9651-4ed2-a941-9414d9aff62c-kube-api-access-c55xt\") pod \"e93c6a82-9651-4ed2-a941-9414d9aff62c\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.493262 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e93c6a82-9651-4ed2-a941-9414d9aff62c-config-volume\") pod \"e93c6a82-9651-4ed2-a941-9414d9aff62c\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.493334 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e93c6a82-9651-4ed2-a941-9414d9aff62c-secret-volume\") pod \"e93c6a82-9651-4ed2-a941-9414d9aff62c\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.493959 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e93c6a82-9651-4ed2-a941-9414d9aff62c-config-volume" (OuterVolumeSpecName: "config-volume") pod "e93c6a82-9651-4ed2-a941-9414d9aff62c" (UID: "e93c6a82-9651-4ed2-a941-9414d9aff62c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.497815 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93c6a82-9651-4ed2-a941-9414d9aff62c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e93c6a82-9651-4ed2-a941-9414d9aff62c" (UID: "e93c6a82-9651-4ed2-a941-9414d9aff62c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.498794 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e93c6a82-9651-4ed2-a941-9414d9aff62c-kube-api-access-c55xt" (OuterVolumeSpecName: "kube-api-access-c55xt") pod "e93c6a82-9651-4ed2-a941-9414d9aff62c" (UID: "e93c6a82-9651-4ed2-a941-9414d9aff62c"). InnerVolumeSpecName "kube-api-access-c55xt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.595009 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e93c6a82-9651-4ed2-a941-9414d9aff62c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.595060 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c55xt\" (UniqueName: \"kubernetes.io/projected/e93c6a82-9651-4ed2-a941-9414d9aff62c-kube-api-access-c55xt\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.595069 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e93c6a82-9651-4ed2-a941-9414d9aff62c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.997593 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" event={"ID":"e93c6a82-9651-4ed2-a941-9414d9aff62c","Type":"ContainerDied","Data":"e6b6e42c855295ba91f6834b95903c938c31c49afcc92b34579974a80c3b5cbc"} Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.998799 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6b6e42c855295ba91f6834b95903c938c31c49afcc92b34579974a80c3b5cbc" Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.997651 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:04 crc kubenswrapper[4902]: I0121 15:30:04.404186 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2"] Jan 21 15:30:04 crc kubenswrapper[4902]: I0121 15:30:04.413398 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2"] Jan 21 15:30:06 crc kubenswrapper[4902]: I0121 15:30:06.303373 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fbc78bb-1faf-4da9-ab79-cee1540bb647" path="/var/lib/kubelet/pods/0fbc78bb-1faf-4da9-ab79-cee1540bb647/volumes" Jan 21 15:30:08 crc kubenswrapper[4902]: I0121 15:30:08.303212 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:30:08 crc kubenswrapper[4902]: E0121 15:30:08.304118 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:30:19 crc kubenswrapper[4902]: I0121 15:30:19.295516 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:30:19 crc kubenswrapper[4902]: E0121 15:30:19.296213 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.670609 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kkdg8"] Jan 21 15:30:20 crc kubenswrapper[4902]: E0121 15:30:20.671260 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e93c6a82-9651-4ed2-a941-9414d9aff62c" containerName="collect-profiles" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.671277 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e93c6a82-9651-4ed2-a941-9414d9aff62c" containerName="collect-profiles" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.671443 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e93c6a82-9651-4ed2-a941-9414d9aff62c" containerName="collect-profiles" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.672802 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.682738 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkdg8"] Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.847158 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-catalog-content\") pod \"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.847245 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-utilities\") pod \"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.847285 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvfq7\" (UniqueName: \"kubernetes.io/projected/39a9fe1b-7335-4734-976f-9fdb787938c0-kube-api-access-rvfq7\") pod \"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.948777 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-utilities\") pod \"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.948863 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvfq7\" (UniqueName: \"kubernetes.io/projected/39a9fe1b-7335-4734-976f-9fdb787938c0-kube-api-access-rvfq7\") pod \"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.949004 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-catalog-content\") pod 
\"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.949534 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-utilities\") pod \"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.949718 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-catalog-content\") pod \"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.978359 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvfq7\" (UniqueName: \"kubernetes.io/projected/39a9fe1b-7335-4734-976f-9fdb787938c0-kube-api-access-rvfq7\") pod \"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.994292 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:21 crc kubenswrapper[4902]: I0121 15:30:21.428627 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkdg8"] Jan 21 15:30:22 crc kubenswrapper[4902]: I0121 15:30:22.090823 4902 scope.go:117] "RemoveContainer" containerID="fa1156cf23ef6713ff3d92ca234f6e5140ae3f940464e50453ee6dd138fecf3b" Jan 21 15:30:22 crc kubenswrapper[4902]: I0121 15:30:22.122247 4902 generic.go:334] "Generic (PLEG): container finished" podID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerID="2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805" exitCode=0 Jan 21 15:30:22 crc kubenswrapper[4902]: I0121 15:30:22.122301 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkdg8" event={"ID":"39a9fe1b-7335-4734-976f-9fdb787938c0","Type":"ContainerDied","Data":"2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805"} Jan 21 15:30:22 crc kubenswrapper[4902]: I0121 15:30:22.122337 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkdg8" event={"ID":"39a9fe1b-7335-4734-976f-9fdb787938c0","Type":"ContainerStarted","Data":"63135e61a2dc6c716def5056ebb4d08cd182f00371ec69399c421fbb8857c147"} Jan 21 15:30:24 crc kubenswrapper[4902]: I0121 15:30:24.143401 4902 generic.go:334] "Generic (PLEG): container finished" podID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerID="3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1" exitCode=0 Jan 21 15:30:24 crc kubenswrapper[4902]: I0121 15:30:24.143821 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkdg8" event={"ID":"39a9fe1b-7335-4734-976f-9fdb787938c0","Type":"ContainerDied","Data":"3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1"} Jan 21 15:30:25 crc kubenswrapper[4902]: I0121 15:30:25.157672 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkdg8" 
event={"ID":"39a9fe1b-7335-4734-976f-9fdb787938c0","Type":"ContainerStarted","Data":"e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f"} Jan 21 15:30:25 crc kubenswrapper[4902]: I0121 15:30:25.182335 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kkdg8" podStartSLOduration=2.648859559 podStartE2EDuration="5.182311517s" podCreationTimestamp="2026-01-21 15:30:20 +0000 UTC" firstStartedPulling="2026-01-21 15:30:22.125089294 +0000 UTC m=+3384.201922373" lastFinishedPulling="2026-01-21 15:30:24.658541302 +0000 UTC m=+3386.735374331" observedRunningTime="2026-01-21 15:30:25.179956711 +0000 UTC m=+3387.256789780" watchObservedRunningTime="2026-01-21 15:30:25.182311517 +0000 UTC m=+3387.259144586" Jan 21 15:30:30 crc kubenswrapper[4902]: I0121 15:30:30.995247 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:30 crc kubenswrapper[4902]: I0121 15:30:30.995620 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:31 crc kubenswrapper[4902]: I0121 15:30:31.053078 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:31 crc kubenswrapper[4902]: I0121 15:30:31.247170 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:31 crc kubenswrapper[4902]: I0121 15:30:31.295684 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:30:31 crc kubenswrapper[4902]: E0121 15:30:31.296105 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:30:31 crc kubenswrapper[4902]: I0121 15:30:31.302314 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkdg8"] Jan 21 15:30:33 crc kubenswrapper[4902]: I0121 15:30:33.222349 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kkdg8" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerName="registry-server" containerID="cri-o://e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f" gracePeriod=2 Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.231557 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.233760 4902 generic.go:334] "Generic (PLEG): container finished" podID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerID="e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f" exitCode=0 Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.233816 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkdg8" event={"ID":"39a9fe1b-7335-4734-976f-9fdb787938c0","Type":"ContainerDied","Data":"e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f"} Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.233851 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkdg8" event={"ID":"39a9fe1b-7335-4734-976f-9fdb787938c0","Type":"ContainerDied","Data":"63135e61a2dc6c716def5056ebb4d08cd182f00371ec69399c421fbb8857c147"} Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.233872 4902 scope.go:117] "RemoveContainer" containerID="e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.259010 4902 scope.go:117] "RemoveContainer" containerID="3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.269753 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-catalog-content\") pod \"39a9fe1b-7335-4734-976f-9fdb787938c0\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.270013 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-utilities\") pod \"39a9fe1b-7335-4734-976f-9fdb787938c0\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.270209 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvfq7\" (UniqueName: \"kubernetes.io/projected/39a9fe1b-7335-4734-976f-9fdb787938c0-kube-api-access-rvfq7\") pod \"39a9fe1b-7335-4734-976f-9fdb787938c0\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.271029 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-utilities" (OuterVolumeSpecName: "utilities") pod "39a9fe1b-7335-4734-976f-9fdb787938c0" (UID: "39a9fe1b-7335-4734-976f-9fdb787938c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.276469 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39a9fe1b-7335-4734-976f-9fdb787938c0-kube-api-access-rvfq7" (OuterVolumeSpecName: "kube-api-access-rvfq7") pod "39a9fe1b-7335-4734-976f-9fdb787938c0" (UID: "39a9fe1b-7335-4734-976f-9fdb787938c0"). InnerVolumeSpecName "kube-api-access-rvfq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.294426 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39a9fe1b-7335-4734-976f-9fdb787938c0" (UID: "39a9fe1b-7335-4734-976f-9fdb787938c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.297648 4902 scope.go:117] "RemoveContainer" containerID="2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.333099 4902 scope.go:117] "RemoveContainer" containerID="e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f" Jan 21 15:30:34 crc kubenswrapper[4902]: E0121 15:30:34.334247 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f\": container with ID starting with e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f not found: ID does not exist" containerID="e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.334302 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f"} err="failed to get container status \"e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f\": rpc error: code = NotFound desc = could not find container \"e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f\": container with ID starting with e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f not found: ID does not exist" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.334334 4902 scope.go:117] "RemoveContainer" containerID="3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1" Jan 21 15:30:34 crc kubenswrapper[4902]: E0121 15:30:34.335127 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1\": container with ID starting with 3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1 not found: ID does not exist" containerID="3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.335205 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1"} err="failed to get container status \"3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1\": rpc error: code = NotFound desc = could not find container \"3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1\": container with ID starting with 3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1 not found: ID does not exist" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.335258 4902 scope.go:117] "RemoveContainer" containerID="2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805" Jan 21 15:30:34 crc kubenswrapper[4902]: E0121 15:30:34.335725 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805\": container with ID starting with 2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805 not found: ID does not exist" containerID="2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.335771 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805"} err="failed to get container status \"2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805\": rpc error: code = NotFound desc = could not find container \"2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805\": container with ID starting with 2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805 not found: ID does not exist" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.372003 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.372095 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.372111 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvfq7\" (UniqueName: \"kubernetes.io/projected/39a9fe1b-7335-4734-976f-9fdb787938c0-kube-api-access-rvfq7\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:35 crc kubenswrapper[4902]: I0121 15:30:35.247903 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:35 crc kubenswrapper[4902]: I0121 15:30:35.269025 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkdg8"] Jan 21 15:30:35 crc kubenswrapper[4902]: I0121 15:30:35.275516 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkdg8"] Jan 21 15:30:36 crc kubenswrapper[4902]: I0121 15:30:36.313358 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" path="/var/lib/kubelet/pods/39a9fe1b-7335-4734-976f-9fdb787938c0/volumes" Jan 21 15:30:42 crc kubenswrapper[4902]: I0121 15:30:42.295452 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:30:42 crc kubenswrapper[4902]: E0121 15:30:42.296306 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:30:55 crc kubenswrapper[4902]: I0121 15:30:55.295855 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:30:55 crc kubenswrapper[4902]: E0121 15:30:55.297130 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:31:09 crc kubenswrapper[4902]: I0121 15:31:09.295396 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:31:09 crc kubenswrapper[4902]: E0121 15:31:09.296678 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:31:22 crc kubenswrapper[4902]: I0121 15:31:22.295342 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:31:22 crc kubenswrapper[4902]: I0121 15:31:22.611995 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"d2b268969053a6288fc1ae2239677e73b8d0905d0f4f4bd5a3225c287ca914ed"} Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.122595 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7768z"] Jan 21 15:31:31 crc kubenswrapper[4902]: E0121 15:31:31.123303 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" 
containerName="extract-content" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.123315 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerName="extract-content" Jan 21 15:31:31 crc kubenswrapper[4902]: E0121 15:31:31.123335 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerName="registry-server" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.123341 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerName="registry-server" Jan 21 15:31:31 crc kubenswrapper[4902]: E0121 15:31:31.123358 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerName="extract-utilities" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.123364 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerName="extract-utilities" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.123492 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerName="registry-server" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.124418 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.154936 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7768z"] Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.249350 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vqv8\" (UniqueName: \"kubernetes.io/projected/43b01214-a7cb-4f07-a4a2-9ca629e85474-kube-api-access-7vqv8\") pod \"community-operators-7768z\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.249614 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-utilities\") pod \"community-operators-7768z\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.249738 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-catalog-content\") pod \"community-operators-7768z\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.350638 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-utilities\") pod \"community-operators-7768z\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.350710 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-catalog-content\") pod \"community-operators-7768z\" (UID: 
\"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.350760 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vqv8\" (UniqueName: \"kubernetes.io/projected/43b01214-a7cb-4f07-a4a2-9ca629e85474-kube-api-access-7vqv8\") pod \"community-operators-7768z\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.351331 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-utilities\") pod \"community-operators-7768z\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.351476 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-catalog-content\") pod \"community-operators-7768z\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.374484 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vqv8\" (UniqueName: \"kubernetes.io/projected/43b01214-a7cb-4f07-a4a2-9ca629e85474-kube-api-access-7vqv8\") pod \"community-operators-7768z\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.462749 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.787749 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7768z"] Jan 21 15:31:32 crc kubenswrapper[4902]: I0121 15:31:32.688755 4902 generic.go:334] "Generic (PLEG): container finished" podID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerID="3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1" exitCode=0 Jan 21 15:31:32 crc kubenswrapper[4902]: I0121 15:31:32.688800 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7768z" event={"ID":"43b01214-a7cb-4f07-a4a2-9ca629e85474","Type":"ContainerDied","Data":"3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1"} Jan 21 15:31:32 crc kubenswrapper[4902]: I0121 15:31:32.689026 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7768z" event={"ID":"43b01214-a7cb-4f07-a4a2-9ca629e85474","Type":"ContainerStarted","Data":"a4fde28708bde8ddfb8d4b7f02f0bb7bca9b9fbbc2803ec40f958a5fa6144701"} Jan 21 15:31:33 crc kubenswrapper[4902]: I0121 15:31:33.708525 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7768z" event={"ID":"43b01214-a7cb-4f07-a4a2-9ca629e85474","Type":"ContainerStarted","Data":"43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb"} Jan 21 15:31:34 crc kubenswrapper[4902]: I0121 15:31:34.719293 4902 generic.go:334] "Generic (PLEG): container finished" podID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerID="43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb" exitCode=0 Jan 21 15:31:34 crc kubenswrapper[4902]: I0121 15:31:34.719367 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7768z" event={"ID":"43b01214-a7cb-4f07-a4a2-9ca629e85474","Type":"ContainerDied","Data":"43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb"} Jan 21 15:31:34 crc kubenswrapper[4902]: I0121 15:31:34.722008 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:31:35 crc kubenswrapper[4902]: I0121 15:31:35.728380 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7768z" event={"ID":"43b01214-a7cb-4f07-a4a2-9ca629e85474","Type":"ContainerStarted","Data":"72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711"} Jan 21 15:31:35 crc kubenswrapper[4902]: I0121 15:31:35.759899 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7768z" podStartSLOduration=2.320205848 podStartE2EDuration="4.759881067s" podCreationTimestamp="2026-01-21 15:31:31 +0000 UTC" firstStartedPulling="2026-01-21 15:31:32.690721477 +0000 UTC m=+3454.767554506" lastFinishedPulling="2026-01-21 15:31:35.130396696 +0000 UTC m=+3457.207229725" observedRunningTime="2026-01-21 15:31:35.750347602 +0000 UTC m=+3457.827180641" watchObservedRunningTime="2026-01-21 15:31:35.759881067 +0000 UTC m=+3457.836714106" Jan 21 15:31:41 crc kubenswrapper[4902]: I0121 15:31:41.463325 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:41 crc kubenswrapper[4902]: I0121 15:31:41.465215 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:41 crc kubenswrapper[4902]: I0121 15:31:41.519286 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:41 crc kubenswrapper[4902]: I0121 15:31:41.840035 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:41 crc kubenswrapper[4902]: I0121 15:31:41.886829 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7768z"] Jan 21 15:31:43 crc kubenswrapper[4902]: I0121 15:31:43.798838 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7768z" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerName="registry-server" containerID="cri-o://72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711" gracePeriod=2 Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.246666 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.356020 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-utilities\") pod \"43b01214-a7cb-4f07-a4a2-9ca629e85474\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.356211 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vqv8\" (UniqueName: \"kubernetes.io/projected/43b01214-a7cb-4f07-a4a2-9ca629e85474-kube-api-access-7vqv8\") pod \"43b01214-a7cb-4f07-a4a2-9ca629e85474\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.356266 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-catalog-content\") pod \"43b01214-a7cb-4f07-a4a2-9ca629e85474\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.357199 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-utilities" (OuterVolumeSpecName: "utilities") pod "43b01214-a7cb-4f07-a4a2-9ca629e85474" (UID: "43b01214-a7cb-4f07-a4a2-9ca629e85474"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.362612 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b01214-a7cb-4f07-a4a2-9ca629e85474-kube-api-access-7vqv8" (OuterVolumeSpecName: "kube-api-access-7vqv8") pod "43b01214-a7cb-4f07-a4a2-9ca629e85474" (UID: "43b01214-a7cb-4f07-a4a2-9ca629e85474"). InnerVolumeSpecName "kube-api-access-7vqv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.435751 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43b01214-a7cb-4f07-a4a2-9ca629e85474" (UID: "43b01214-a7cb-4f07-a4a2-9ca629e85474"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.457842 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vqv8\" (UniqueName: \"kubernetes.io/projected/43b01214-a7cb-4f07-a4a2-9ca629e85474-kube-api-access-7vqv8\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.457894 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.457910 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.808150 4902 generic.go:334] "Generic (PLEG): container finished" podID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerID="72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711" exitCode=0 Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.808219 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.808246 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7768z" event={"ID":"43b01214-a7cb-4f07-a4a2-9ca629e85474","Type":"ContainerDied","Data":"72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711"} Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.809253 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7768z" event={"ID":"43b01214-a7cb-4f07-a4a2-9ca629e85474","Type":"ContainerDied","Data":"a4fde28708bde8ddfb8d4b7f02f0bb7bca9b9fbbc2803ec40f958a5fa6144701"} Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.809291 4902 scope.go:117] "RemoveContainer" containerID="72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.837134 4902 scope.go:117] "RemoveContainer" containerID="43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.843660 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7768z"] Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.850212 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7768z"] Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.872602 4902 scope.go:117] "RemoveContainer" containerID="3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.889959 4902 scope.go:117] "RemoveContainer" containerID="72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711" Jan 21 15:31:44 crc kubenswrapper[4902]: E0121 15:31:44.890464 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711\": container with ID starting with 72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711 not found: ID does not exist" containerID="72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.890731 
4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711"} err="failed to get container status \"72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711\": rpc error: code = NotFound desc = could not find container \"72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711\": container with ID starting with 72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711 not found: ID does not exist" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.891829 4902 scope.go:117] "RemoveContainer" containerID="43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb" Jan 21 15:31:44 crc kubenswrapper[4902]: E0121 15:31:44.893447 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb\": container with ID starting with 43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb not found: ID does not exist" containerID="43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.893481 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb"} err="failed to get container status \"43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb\": rpc error: code = NotFound desc = could not find container \"43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb\": container with ID starting with 43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb not found: ID does not exist" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.893505 4902 scope.go:117] "RemoveContainer" containerID="3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1" Jan 21 15:31:44 crc kubenswrapper[4902]: E0121 15:31:44.895122 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1\": container with ID starting with 3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1 not found: ID does not exist" containerID="3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.895228 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1"} err="failed to get container status \"3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1\": rpc error: code = NotFound desc = could not find container \"3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1\": container with ID starting with 3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1 not found: ID does not exist" Jan 21 15:31:46 crc kubenswrapper[4902]: I0121 15:31:46.308445 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" path="/var/lib/kubelet/pods/43b01214-a7cb-4f07-a4a2-9ca629e85474/volumes" Jan 21 15:33:47 crc kubenswrapper[4902]: I0121 15:33:47.770565 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:33:47 crc kubenswrapper[4902]: I0121 15:33:47.771394 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:34:17 crc kubenswrapper[4902]: I0121 15:34:17.769446 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:34:17 crc kubenswrapper[4902]: I0121 15:34:17.770008 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:34:47 crc kubenswrapper[4902]: I0121 15:34:47.770606 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:34:47 crc kubenswrapper[4902]: I0121 15:34:47.772689 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:34:47 crc kubenswrapper[4902]: I0121 15:34:47.772841 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:34:47 crc kubenswrapper[4902]: I0121 15:34:47.773691 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2b268969053a6288fc1ae2239677e73b8d0905d0f4f4bd5a3225c287ca914ed"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:34:47 crc kubenswrapper[4902]: I0121 15:34:47.775159 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://d2b268969053a6288fc1ae2239677e73b8d0905d0f4f4bd5a3225c287ca914ed" gracePeriod=600 Jan 21 15:34:48 crc kubenswrapper[4902]: I0121 15:34:48.289573 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="d2b268969053a6288fc1ae2239677e73b8d0905d0f4f4bd5a3225c287ca914ed" exitCode=0 Jan 21 15:34:48 crc kubenswrapper[4902]: I0121 15:34:48.289636 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"d2b268969053a6288fc1ae2239677e73b8d0905d0f4f4bd5a3225c287ca914ed"} Jan 21 15:34:48 crc kubenswrapper[4902]: I0121 15:34:48.289907 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321"} Jan 21 15:34:48 crc kubenswrapper[4902]: I0121 15:34:48.289930 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:35:27 crc kubenswrapper[4902]: I0121 15:35:27.888960 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rrdhp"] Jan 21 15:35:27 crc kubenswrapper[4902]: E0121 15:35:27.889821 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerName="registry-server" Jan 21 15:35:27 crc kubenswrapper[4902]: I0121 15:35:27.889839 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerName="registry-server" Jan 21 15:35:27 crc kubenswrapper[4902]: E0121 15:35:27.889859 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerName="extract-content" Jan 21 15:35:27 crc kubenswrapper[4902]: I0121 15:35:27.889867 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerName="extract-content" Jan 21 15:35:27 crc kubenswrapper[4902]: E0121 15:35:27.889884 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerName="extract-utilities" Jan 21 15:35:27 crc kubenswrapper[4902]: I0121 15:35:27.889892 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerName="extract-utilities" Jan 21 15:35:27 crc kubenswrapper[4902]: I0121 15:35:27.890094 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerName="registry-server" Jan 21 15:35:27 crc kubenswrapper[4902]: I0121 15:35:27.891285 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:27 crc kubenswrapper[4902]: I0121 15:35:27.901831 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rrdhp"] Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.003227 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-catalog-content\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.003293 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-utilities\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.003354 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42wkh\" (UniqueName: \"kubernetes.io/projected/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-kube-api-access-42wkh\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.104141 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-catalog-content\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.104476 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-utilities\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.104627 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42wkh\" (UniqueName: \"kubernetes.io/projected/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-kube-api-access-42wkh\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.104752 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-catalog-content\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.104984 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-utilities\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.125166 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-42wkh\" (UniqueName: \"kubernetes.io/projected/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-kube-api-access-42wkh\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.207438 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.673077 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rrdhp"] Jan 21 15:35:29 crc kubenswrapper[4902]: I0121 15:35:29.641244 4902 generic.go:334] "Generic (PLEG): container finished" podID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerID="828f9daf1127ffe2366fc257cd7eb968158b025f64de8f1eeda00f7bff957c80" exitCode=0 Jan 21 15:35:29 crc kubenswrapper[4902]: I0121 15:35:29.641310 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrdhp" event={"ID":"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f","Type":"ContainerDied","Data":"828f9daf1127ffe2366fc257cd7eb968158b025f64de8f1eeda00f7bff957c80"} Jan 21 15:35:29 crc kubenswrapper[4902]: I0121 15:35:29.641647 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrdhp" event={"ID":"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f","Type":"ContainerStarted","Data":"ce44022cdb20f101ec872cfce74fcd3ac6eaa437f7d48c42d63229c91b61642c"} Jan 21 15:35:31 crc kubenswrapper[4902]: I0121 15:35:31.665036 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrdhp" event={"ID":"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f","Type":"ContainerStarted","Data":"686b240c6ffdfc497e30c4c41658f30c7060e2e55f0a14e5732c7537e9030851"} Jan 21 15:35:32 crc kubenswrapper[4902]: I0121 15:35:32.674851 4902 generic.go:334] "Generic (PLEG): container finished" podID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerID="686b240c6ffdfc497e30c4c41658f30c7060e2e55f0a14e5732c7537e9030851" exitCode=0 Jan 21 15:35:32 crc kubenswrapper[4902]: I0121 15:35:32.674910 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrdhp" event={"ID":"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f","Type":"ContainerDied","Data":"686b240c6ffdfc497e30c4c41658f30c7060e2e55f0a14e5732c7537e9030851"} Jan 21 15:35:33 crc kubenswrapper[4902]: I0121 15:35:33.686467 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrdhp" event={"ID":"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f","Type":"ContainerStarted","Data":"24b813e80bc07f3368f385fb08ce6b8441d40a49c4b80989502abfe186c7178e"} Jan 21 15:35:38 crc kubenswrapper[4902]: I0121 15:35:38.208169 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:38 crc kubenswrapper[4902]: I0121 15:35:38.208635 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:39 crc kubenswrapper[4902]: I0121 15:35:39.269196 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rrdhp" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="registry-server" probeResult="failure" output=< Jan 21 15:35:39 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 15:35:39 crc kubenswrapper[4902]: > Jan 21 15:35:48 crc 
kubenswrapper[4902]: I0121 15:35:48.285227 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:48 crc kubenswrapper[4902]: I0121 15:35:48.324670 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rrdhp" podStartSLOduration=17.65265842 podStartE2EDuration="21.324644721s" podCreationTimestamp="2026-01-21 15:35:27 +0000 UTC" firstStartedPulling="2026-01-21 15:35:29.643145484 +0000 UTC m=+3691.719978523" lastFinishedPulling="2026-01-21 15:35:33.315131795 +0000 UTC m=+3695.391964824" observedRunningTime="2026-01-21 15:35:33.712524587 +0000 UTC m=+3695.789357666" watchObservedRunningTime="2026-01-21 15:35:48.324644721 +0000 UTC m=+3710.401477780" Jan 21 15:35:48 crc kubenswrapper[4902]: I0121 15:35:48.370849 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:48 crc kubenswrapper[4902]: I0121 15:35:48.536714 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rrdhp"] Jan 21 15:35:49 crc kubenswrapper[4902]: I0121 15:35:49.826334 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rrdhp" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="registry-server" containerID="cri-o://24b813e80bc07f3368f385fb08ce6b8441d40a49c4b80989502abfe186c7178e" gracePeriod=2 Jan 21 15:35:50 crc kubenswrapper[4902]: I0121 15:35:50.840852 4902 generic.go:334] "Generic (PLEG): container finished" podID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerID="24b813e80bc07f3368f385fb08ce6b8441d40a49c4b80989502abfe186c7178e" exitCode=0 Jan 21 15:35:50 crc kubenswrapper[4902]: I0121 15:35:50.840955 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrdhp" event={"ID":"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f","Type":"ContainerDied","Data":"24b813e80bc07f3368f385fb08ce6b8441d40a49c4b80989502abfe186c7178e"} Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.343603 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.377435 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-catalog-content\") pod \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.377574 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42wkh\" (UniqueName: \"kubernetes.io/projected/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-kube-api-access-42wkh\") pod \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.377605 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-utilities\") pod \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.379371 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-utilities" (OuterVolumeSpecName: "utilities") pod "a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" (UID: "a48d7c61-552f-4d48-9b2e-cd7d1099fb3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.385854 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-kube-api-access-42wkh" (OuterVolumeSpecName: "kube-api-access-42wkh") pod "a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" (UID: "a48d7c61-552f-4d48-9b2e-cd7d1099fb3f"). InnerVolumeSpecName "kube-api-access-42wkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.478521 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42wkh\" (UniqueName: \"kubernetes.io/projected/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-kube-api-access-42wkh\") on node \"crc\" DevicePath \"\"" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.478556 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.534457 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" (UID: "a48d7c61-552f-4d48-9b2e-cd7d1099fb3f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.579572 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.861893 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrdhp" event={"ID":"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f","Type":"ContainerDied","Data":"ce44022cdb20f101ec872cfce74fcd3ac6eaa437f7d48c42d63229c91b61642c"} Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.861987 4902 scope.go:117] "RemoveContainer" containerID="24b813e80bc07f3368f385fb08ce6b8441d40a49c4b80989502abfe186c7178e" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.863337 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.906208 4902 scope.go:117] "RemoveContainer" containerID="686b240c6ffdfc497e30c4c41658f30c7060e2e55f0a14e5732c7537e9030851" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.927424 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rrdhp"] Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.938108 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rrdhp"] Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.952707 4902 scope.go:117] "RemoveContainer" containerID="828f9daf1127ffe2366fc257cd7eb968158b025f64de8f1eeda00f7bff957c80" Jan 21 15:35:52 crc kubenswrapper[4902]: I0121 15:35:52.313870 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" path="/var/lib/kubelet/pods/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f/volumes" Jan 21 15:37:17 crc kubenswrapper[4902]: I0121 15:37:17.770374 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:37:17 crc kubenswrapper[4902]: I0121 15:37:17.771022 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:37:47 crc kubenswrapper[4902]: I0121 15:37:47.770217 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:37:47 crc kubenswrapper[4902]: I0121 15:37:47.772227 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:38:17 crc kubenswrapper[4902]: I0121 15:38:17.769394 4902 patch_prober.go:28] 
interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:38:17 crc kubenswrapper[4902]: I0121 15:38:17.769992 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:38:17 crc kubenswrapper[4902]: I0121 15:38:17.770059 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:38:17 crc kubenswrapper[4902]: I0121 15:38:17.770761 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:38:17 crc kubenswrapper[4902]: I0121 15:38:17.770824 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" gracePeriod=600 Jan 21 15:38:17 crc kubenswrapper[4902]: E0121 15:38:17.914283 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:38:18 crc kubenswrapper[4902]: I0121 15:38:18.085143 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" exitCode=0 Jan 21 15:38:18 crc kubenswrapper[4902]: I0121 15:38:18.085196 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321"} Jan 21 15:38:18 crc kubenswrapper[4902]: I0121 15:38:18.085240 4902 scope.go:117] "RemoveContainer" containerID="d2b268969053a6288fc1ae2239677e73b8d0905d0f4f4bd5a3225c287ca914ed" Jan 21 15:38:18 crc kubenswrapper[4902]: I0121 15:38:18.085847 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:38:18 crc kubenswrapper[4902]: E0121 15:38:18.086281 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:38:32 crc kubenswrapper[4902]: I0121 15:38:32.294655 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:38:32 crc kubenswrapper[4902]: E0121 15:38:32.295468 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:38:47 crc kubenswrapper[4902]: I0121 15:38:47.297371 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:38:47 crc kubenswrapper[4902]: E0121 15:38:47.298508 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:39:02 crc kubenswrapper[4902]: I0121 15:39:02.295199 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:39:02 crc kubenswrapper[4902]: E0121 15:39:02.295901 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:39:16 crc kubenswrapper[4902]: I0121 15:39:16.294903 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:39:16 crc kubenswrapper[4902]: E0121 15:39:16.295773 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:39:28 crc kubenswrapper[4902]: I0121 15:39:28.301158 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:39:28 crc kubenswrapper[4902]: E0121 15:39:28.301815 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:39:40 crc kubenswrapper[4902]: I0121 15:39:40.295650 4902 
scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:39:40 crc kubenswrapper[4902]: E0121 15:39:40.296907 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:39:54 crc kubenswrapper[4902]: I0121 15:39:54.294923 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:39:54 crc kubenswrapper[4902]: E0121 15:39:54.296012 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:40:07 crc kubenswrapper[4902]: I0121 15:40:07.295253 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:40:07 crc kubenswrapper[4902]: E0121 15:40:07.298285 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.811935 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vchwj"] Jan 21 15:40:11 crc kubenswrapper[4902]: E0121 15:40:11.813559 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="extract-utilities" Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.813640 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="extract-utilities" Jan 21 15:40:11 crc kubenswrapper[4902]: E0121 15:40:11.813692 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="extract-content" Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.813706 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="extract-content" Jan 21 15:40:11 crc kubenswrapper[4902]: E0121 15:40:11.813731 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="registry-server" Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.813744 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="registry-server" Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.814101 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="registry-server" Jan 21 15:40:11 crc 
kubenswrapper[4902]: I0121 15:40:11.816213 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vchwj"
Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.823713 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vchwj"]
Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.945180 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q9f6\" (UniqueName: \"kubernetes.io/projected/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-kube-api-access-5q9f6\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj"
Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.945307 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-catalog-content\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj"
Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.945346 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-utilities\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj"
Jan 21 15:40:12 crc kubenswrapper[4902]: I0121 15:40:12.046895 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q9f6\" (UniqueName: \"kubernetes.io/projected/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-kube-api-access-5q9f6\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj"
Jan 21 15:40:12 crc kubenswrapper[4902]: I0121 15:40:12.046966 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-catalog-content\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj"
Jan 21 15:40:12 crc kubenswrapper[4902]: I0121 15:40:12.046991 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-utilities\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj"
Jan 21 15:40:12 crc kubenswrapper[4902]: I0121 15:40:12.047514 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-catalog-content\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj"
Jan 21 15:40:12 crc kubenswrapper[4902]: I0121 15:40:12.047542 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-utilities\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj"
Jan 21 15:40:12 crc kubenswrapper[4902]: I0121 15:40:12.080726 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q9f6\" (UniqueName: \"kubernetes.io/projected/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-kube-api-access-5q9f6\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj"
Jan 21 15:40:12 crc kubenswrapper[4902]: I0121 15:40:12.164791 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vchwj"
Jan 21 15:40:12 crc kubenswrapper[4902]: I0121 15:40:12.618996 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vchwj"]
Jan 21 15:40:13 crc kubenswrapper[4902]: I0121 15:40:13.047296 4902 generic.go:334] "Generic (PLEG): container finished" podID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerID="ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf" exitCode=0
Jan 21 15:40:13 crc kubenswrapper[4902]: I0121 15:40:13.047345 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vchwj" event={"ID":"ec10f1dd-7bfa-4767-921e-d67dc1b461c7","Type":"ContainerDied","Data":"ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf"}
Jan 21 15:40:13 crc kubenswrapper[4902]: I0121 15:40:13.047607 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vchwj" event={"ID":"ec10f1dd-7bfa-4767-921e-d67dc1b461c7","Type":"ContainerStarted","Data":"a12d6fa48cd8508032045083a1ac196784061211f8ca33abf1e88badd2348a1a"}
Jan 21 15:40:13 crc kubenswrapper[4902]: I0121 15:40:13.050503 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 15:40:14 crc kubenswrapper[4902]: I0121 15:40:14.058537 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vchwj" event={"ID":"ec10f1dd-7bfa-4767-921e-d67dc1b461c7","Type":"ContainerStarted","Data":"0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa"}
Jan 21 15:40:15 crc kubenswrapper[4902]: I0121 15:40:15.071372 4902 generic.go:334] "Generic (PLEG): container finished" podID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerID="0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa" exitCode=0
Jan 21 15:40:15 crc kubenswrapper[4902]: I0121 15:40:15.071448 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vchwj" event={"ID":"ec10f1dd-7bfa-4767-921e-d67dc1b461c7","Type":"ContainerDied","Data":"0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa"}
Jan 21 15:40:16 crc kubenswrapper[4902]: I0121 15:40:16.082251 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vchwj" event={"ID":"ec10f1dd-7bfa-4767-921e-d67dc1b461c7","Type":"ContainerStarted","Data":"28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb"}
Jan 21 15:40:16 crc kubenswrapper[4902]: I0121 15:40:16.114240 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vchwj" podStartSLOduration=2.678047382 podStartE2EDuration="5.11422234s" podCreationTimestamp="2026-01-21 15:40:11 +0000 UTC" firstStartedPulling="2026-01-21 15:40:13.049998252 +0000 UTC m=+3975.126831281" lastFinishedPulling="2026-01-21 15:40:15.48617321 +0000 UTC m=+3977.563006239" observedRunningTime="2026-01-21 15:40:16.110189977 +0000 UTC m=+3978.187023016" watchObservedRunningTime="2026-01-21 15:40:16.11422234 +0000 UTC m=+3978.191055379"
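[Editor's note] The startup-latency record above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (15:40:16.114 - 15:40:11 = 5.114s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling - firstStartedPulling = 2.436s) from that, giving the logged 2.678047382. A minimal sketch of the arithmetic, reusing the timestamps from the entry; the layout string and variable names are illustrative, not kubelet source:

package main

import (
	"fmt"
	"time"
)

// Reproduces the podStartSLOduration arithmetic from the log entry above:
// SLO duration = (watchObservedRunningTime - podCreationTimestamp) - image pull time.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-21 15:40:11 +0000 UTC")
	firstPull := parse("2026-01-21 15:40:13.049998252 +0000 UTC")
	lastPull := parse("2026-01-21 15:40:15.48617321 +0000 UTC")
	observed := parse("2026-01-21 15:40:16.11422234 +0000 UTC")

	e2e := observed.Sub(created)    // podStartE2EDuration: 5.11422234s
	pull := lastPull.Sub(firstPull) // time spent pulling images: ~2.436s
	slo := e2e - pull               // podStartSLOduration: 2.678047382s
	fmt.Println(e2e, pull, slo)
}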
Jan 21 15:40:22 crc kubenswrapper[4902]: I0121 15:40:22.177447 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vchwj"
Jan 21 15:40:22 crc kubenswrapper[4902]: I0121 15:40:22.178272 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vchwj"
Jan 21 15:40:22 crc kubenswrapper[4902]: I0121 15:40:22.240338 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vchwj"
Jan 21 15:40:22 crc kubenswrapper[4902]: I0121 15:40:22.295203 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321"
Jan 21 15:40:22 crc kubenswrapper[4902]: E0121 15:40:22.295523 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:40:23 crc kubenswrapper[4902]: I0121 15:40:23.211597 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vchwj"
Jan 21 15:40:23 crc kubenswrapper[4902]: I0121 15:40:23.282390 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vchwj"]
Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.162145 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vchwj" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerName="registry-server" containerID="cri-o://28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb" gracePeriod=2
Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.644433 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vchwj"
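[Editor's note] The kill at 15:40:25 carries gracePeriod=2: the kubelet asks the CRI runtime (cri-o here) to stop the container, and the runtime delivers the stop signal, waits up to two seconds, then force-kills. A sketch of that term-then-kill pattern against a local process; stopWithGrace and the sleep child are illustrative stand-ins, not kubelet code:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// Illustrates the pattern behind "Killing container with a grace period ...
// gracePeriod=2": signal, wait up to the grace window, then escalate.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace window elapsed: SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithGrace(cmd, 2*time.Second))
}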
Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.806381 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-catalog-content\") pod \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") "
Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.806497 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-utilities\") pod \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") "
Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.806570 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q9f6\" (UniqueName: \"kubernetes.io/projected/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-kube-api-access-5q9f6\") pod \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") "
Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.807691 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-utilities" (OuterVolumeSpecName: "utilities") pod "ec10f1dd-7bfa-4767-921e-d67dc1b461c7" (UID: "ec10f1dd-7bfa-4767-921e-d67dc1b461c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.813689 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-kube-api-access-5q9f6" (OuterVolumeSpecName: "kube-api-access-5q9f6") pod "ec10f1dd-7bfa-4767-921e-d67dc1b461c7" (UID: "ec10f1dd-7bfa-4767-921e-d67dc1b461c7"). InnerVolumeSpecName "kube-api-access-5q9f6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.851559 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec10f1dd-7bfa-4767-921e-d67dc1b461c7" (UID: "ec10f1dd-7bfa-4767-921e-d67dc1b461c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.908035 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.908094 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.908111 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q9f6\" (UniqueName: \"kubernetes.io/projected/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-kube-api-access-5q9f6\") on node \"crc\" DevicePath \"\""
Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.171874 4902 generic.go:334] "Generic (PLEG): container finished" podID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerID="28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb" exitCode=0
Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.171981 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vchwj"
Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.171979 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vchwj" event={"ID":"ec10f1dd-7bfa-4767-921e-d67dc1b461c7","Type":"ContainerDied","Data":"28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb"}
Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.173421 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vchwj" event={"ID":"ec10f1dd-7bfa-4767-921e-d67dc1b461c7","Type":"ContainerDied","Data":"a12d6fa48cd8508032045083a1ac196784061211f8ca33abf1e88badd2348a1a"}
Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.173460 4902 scope.go:117] "RemoveContainer" containerID="28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb"
Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.201489 4902 scope.go:117] "RemoveContainer" containerID="0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa"
Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.224212 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vchwj"]
Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.233759 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vchwj"]
Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.235696 4902 scope.go:117] "RemoveContainer" containerID="ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf"
Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.269241 4902 scope.go:117] "RemoveContainer" containerID="28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb"
Jan 21 15:40:26 crc kubenswrapper[4902]: E0121 15:40:26.269992 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb\": container with ID starting with 28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb not found: ID does not exist" containerID="28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb"
Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.270066 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb"} err="failed to get container status \"28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb\": rpc error: code = NotFound desc = could not find container \"28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb\": container with ID starting with 28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb not found: ID does not exist"
Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.270112 4902 scope.go:117] "RemoveContainer" containerID="0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa"
Jan 21 15:40:26 crc kubenswrapper[4902]: E0121 15:40:26.270587 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa\": container with ID starting with 0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa not found: ID does not exist" containerID="0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa"
Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.270639 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa"} err="failed to get container status \"0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa\": rpc error: code = NotFound desc = could not find container \"0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa\": container with ID starting with 0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa not found: ID does not exist"
Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.270673 4902 scope.go:117] "RemoveContainer" containerID="ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf"
Jan 21 15:40:26 crc kubenswrapper[4902]: E0121 15:40:26.271011 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf\": container with ID starting with ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf not found: ID does not exist" containerID="ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf"
Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.271065 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf"} err="failed to get container status \"ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf\": rpc error: code = NotFound desc = could not find container \"ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf\": container with ID starting with ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf not found: ID does not exist"
Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.309776 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" path="/var/lib/kubelet/pods/ec10f1dd-7bfa-4767-921e-d67dc1b461c7/volumes"
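[Editor's note] The NotFound pairs above ("ContainerStatus from runtime service failed" followed by "DeleteContainer returned error") read like failures but are a benign race: by the time the deferred RemoveContainer re-queried cri-o, the runtime had already pruned those containers. Code that tolerates this normally switches on the gRPC status code rather than the message text; a minimal sketch, where the fabricated err stands in for a CRI client response:

package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether a CRI error just means the container no longer
// exists, which is how the NotFound responses above should be read.
func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	err := status.Error(codes.NotFound, `could not find container "28f4b1..."`)
	fmt.Println(alreadyGone(err))                // true: safe to ignore
	fmt.Println(alreadyGone(errors.New("boom"))) // false: a real failure
}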
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:40:49 crc kubenswrapper[4902]: I0121 15:40:49.295210 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:40:49 crc kubenswrapper[4902]: E0121 15:40:49.296224 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:41:02 crc kubenswrapper[4902]: I0121 15:41:02.295209 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:41:02 crc kubenswrapper[4902]: E0121 15:41:02.295914 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:41:13 crc kubenswrapper[4902]: I0121 15:41:13.294728 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:41:13 crc kubenswrapper[4902]: E0121 15:41:13.295787 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:41:24 crc kubenswrapper[4902]: I0121 15:41:24.295177 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:41:24 crc kubenswrapper[4902]: E0121 15:41:24.296202 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:41:39 crc kubenswrapper[4902]: I0121 15:41:39.295613 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:41:39 crc kubenswrapper[4902]: E0121 15:41:39.296669 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.280174 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wvsn8"]
Jan 21 15:41:46 crc kubenswrapper[4902]: E0121 15:41:46.281205 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerName="extract-content"
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.281221 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerName="extract-content"
Jan 21 15:41:46 crc kubenswrapper[4902]: E0121 15:41:46.281251 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerName="extract-utilities"
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.281261 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerName="extract-utilities"
Jan 21 15:41:46 crc kubenswrapper[4902]: E0121 15:41:46.281276 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerName="registry-server"
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.281284 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerName="registry-server"
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.281459 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerName="registry-server"
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.283174 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvsn8"
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.317036 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvsn8"]
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.471395 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-utilities\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8"
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.471470 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-catalog-content\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8"
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.471605 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l59p4\" (UniqueName: \"kubernetes.io/projected/69b1aca2-2d07-48af-8875-7f4600c6761c-kube-api-access-l59p4\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8"
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.572781 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-utilities\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8"
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.572876 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-catalog-content\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8"
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.572938 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l59p4\" (UniqueName: \"kubernetes.io/projected/69b1aca2-2d07-48af-8875-7f4600c6761c-kube-api-access-l59p4\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8"
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.573582 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-utilities\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8"
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.573687 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-catalog-content\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8"
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.600655 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l59p4\" (UniqueName: \"kubernetes.io/projected/69b1aca2-2d07-48af-8875-7f4600c6761c-kube-api-access-l59p4\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8"
Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.610392 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvsn8"
Jan 21 15:41:47 crc kubenswrapper[4902]: I0121 15:41:47.051716 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvsn8"]
Jan 21 15:41:47 crc kubenswrapper[4902]: I0121 15:41:47.924258 4902 generic.go:334] "Generic (PLEG): container finished" podID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerID="b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297" exitCode=0
Jan 21 15:41:47 crc kubenswrapper[4902]: I0121 15:41:47.924372 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvsn8" event={"ID":"69b1aca2-2d07-48af-8875-7f4600c6761c","Type":"ContainerDied","Data":"b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297"}
Jan 21 15:41:47 crc kubenswrapper[4902]: I0121 15:41:47.924536 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvsn8" event={"ID":"69b1aca2-2d07-48af-8875-7f4600c6761c","Type":"ContainerStarted","Data":"08bfe779f035ed88017c1165646e93b26c0f9c40cf978a2efa9fac64b28aafd0"}
Jan 21 15:41:48 crc kubenswrapper[4902]: I0121 15:41:48.936931 4902 generic.go:334] "Generic (PLEG): container finished" podID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerID="85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5" exitCode=0
Jan 21 15:41:48 crc kubenswrapper[4902]: I0121 15:41:48.937086 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvsn8" event={"ID":"69b1aca2-2d07-48af-8875-7f4600c6761c","Type":"ContainerDied","Data":"85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5"}
Jan 21 15:41:49 crc kubenswrapper[4902]: I0121 15:41:49.949306 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvsn8" event={"ID":"69b1aca2-2d07-48af-8875-7f4600c6761c","Type":"ContainerStarted","Data":"86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105"}
Jan 21 15:41:49 crc kubenswrapper[4902]: I0121 15:41:49.993302 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wvsn8" podStartSLOduration=2.580537099 podStartE2EDuration="3.993219744s" podCreationTimestamp="2026-01-21 15:41:46 +0000 UTC" firstStartedPulling="2026-01-21 15:41:47.926942248 +0000 UTC m=+4070.003775297" lastFinishedPulling="2026-01-21 15:41:49.339624883 +0000 UTC m=+4071.416457942" observedRunningTime="2026-01-21 15:41:49.97587694 +0000 UTC m=+4072.052710009" watchObservedRunningTime="2026-01-21 15:41:49.993219744 +0000 UTC m=+4072.070052823"
Jan 21 15:41:52 crc kubenswrapper[4902]: I0121 15:41:52.295412 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321"
Jan 21 15:41:52 crc kubenswrapper[4902]: E0121 15:41:52.296106 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:41:56 crc kubenswrapper[4902]: I0121 15:41:56.611505 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wvsn8"
Jan 21 15:41:56 crc kubenswrapper[4902]: I0121 15:41:56.611998 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wvsn8"
Jan 21 15:41:56 crc kubenswrapper[4902]: I0121 15:41:56.686217 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wvsn8"
Jan 21 15:41:57 crc kubenswrapper[4902]: I0121 15:41:57.081221 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wvsn8"
Jan 21 15:41:57 crc kubenswrapper[4902]: I0121 15:41:57.142129 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvsn8"]
Jan 21 15:41:59 crc kubenswrapper[4902]: I0121 15:41:59.019024 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wvsn8" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerName="registry-server" containerID="cri-o://86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105" gracePeriod=2
Jan 21 15:41:59 crc kubenswrapper[4902]: I0121 15:41:59.938523 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvsn8"
Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.020312 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-utilities\") pod \"69b1aca2-2d07-48af-8875-7f4600c6761c\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") "
Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.020388 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l59p4\" (UniqueName: \"kubernetes.io/projected/69b1aca2-2d07-48af-8875-7f4600c6761c-kube-api-access-l59p4\") pod \"69b1aca2-2d07-48af-8875-7f4600c6761c\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") "
Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.020431 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-catalog-content\") pod \"69b1aca2-2d07-48af-8875-7f4600c6761c\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") "
Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.021326 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-utilities" (OuterVolumeSpecName: "utilities") pod "69b1aca2-2d07-48af-8875-7f4600c6761c" (UID: "69b1aca2-2d07-48af-8875-7f4600c6761c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.026833 4902 generic.go:334] "Generic (PLEG): container finished" podID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerID="86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105" exitCode=0 Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.026871 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvsn8" event={"ID":"69b1aca2-2d07-48af-8875-7f4600c6761c","Type":"ContainerDied","Data":"86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105"} Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.026904 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvsn8" event={"ID":"69b1aca2-2d07-48af-8875-7f4600c6761c","Type":"ContainerDied","Data":"08bfe779f035ed88017c1165646e93b26c0f9c40cf978a2efa9fac64b28aafd0"} Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.026921 4902 scope.go:117] "RemoveContainer" containerID="86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.026952 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.029600 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b1aca2-2d07-48af-8875-7f4600c6761c-kube-api-access-l59p4" (OuterVolumeSpecName: "kube-api-access-l59p4") pod "69b1aca2-2d07-48af-8875-7f4600c6761c" (UID: "69b1aca2-2d07-48af-8875-7f4600c6761c"). InnerVolumeSpecName "kube-api-access-l59p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.044832 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69b1aca2-2d07-48af-8875-7f4600c6761c" (UID: "69b1aca2-2d07-48af-8875-7f4600c6761c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.060262 4902 scope.go:117] "RemoveContainer" containerID="85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.078602 4902 scope.go:117] "RemoveContainer" containerID="b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.101559 4902 scope.go:117] "RemoveContainer" containerID="86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105" Jan 21 15:42:00 crc kubenswrapper[4902]: E0121 15:42:00.102170 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105\": container with ID starting with 86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105 not found: ID does not exist" containerID="86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.102247 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105"} err="failed to get container status \"86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105\": rpc error: code = NotFound desc = could not find container \"86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105\": container with ID starting with 86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105 not found: ID does not exist" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.102280 4902 scope.go:117] "RemoveContainer" containerID="85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5" Jan 21 15:42:00 crc kubenswrapper[4902]: E0121 15:42:00.102709 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5\": container with ID starting with 85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5 not found: ID does not exist" containerID="85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.102853 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5"} err="failed to get container status \"85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5\": rpc error: code = NotFound desc = could not find container \"85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5\": container with ID starting with 85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5 not found: ID does not exist" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.102889 4902 scope.go:117] "RemoveContainer" containerID="b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297" Jan 21 15:42:00 crc kubenswrapper[4902]: E0121 15:42:00.103189 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297\": container with ID starting with b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297 not found: ID does not exist" containerID="b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297" Jan 21 15:42:00 crc 
kubenswrapper[4902]: I0121 15:42:00.103215 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297"} err="failed to get container status \"b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297\": rpc error: code = NotFound desc = could not find container \"b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297\": container with ID starting with b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297 not found: ID does not exist" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.121405 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.121434 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l59p4\" (UniqueName: \"kubernetes.io/projected/69b1aca2-2d07-48af-8875-7f4600c6761c-kube-api-access-l59p4\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.121445 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.392507 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvsn8"] Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.397569 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvsn8"] Jan 21 15:42:02 crc kubenswrapper[4902]: I0121 15:42:02.308629 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" path="/var/lib/kubelet/pods/69b1aca2-2d07-48af-8875-7f4600c6761c/volumes" Jan 21 15:42:04 crc kubenswrapper[4902]: I0121 15:42:04.299490 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:42:04 crc kubenswrapper[4902]: E0121 15:42:04.300105 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:42:15 crc kubenswrapper[4902]: I0121 15:42:15.294343 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:42:15 crc kubenswrapper[4902]: E0121 15:42:15.295157 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:42:26 crc kubenswrapper[4902]: I0121 15:42:26.295254 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:42:26 crc kubenswrapper[4902]: E0121 
Jan 21 15:42:39 crc kubenswrapper[4902]: I0121 15:42:39.295198 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321"
Jan 21 15:42:39 crc kubenswrapper[4902]: E0121 15:42:39.295985 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:42:50 crc kubenswrapper[4902]: I0121 15:42:50.295395 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321"
Jan 21 15:42:50 crc kubenswrapper[4902]: E0121 15:42:50.296267 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:43:01 crc kubenswrapper[4902]: I0121 15:43:01.295142 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321"
Jan 21 15:43:01 crc kubenswrapper[4902]: E0121 15:43:01.296196 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:43:14 crc kubenswrapper[4902]: I0121 15:43:14.294872 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321"
Jan 21 15:43:14 crc kubenswrapper[4902]: E0121 15:43:14.295717 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 15:43:27 crc kubenswrapper[4902]: I0121 15:43:27.294830 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321"
Jan 21 15:43:27 crc kubenswrapper[4902]: I0121 15:43:27.800712 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"96a0c468edaf7e0a12819e67dd2a8451666decc72c19f24c18c03a05bac8ba27"}
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.246704 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vl7zv"]
Jan 21 15:44:16 crc kubenswrapper[4902]: E0121 15:44:16.249756 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerName="extract-content"
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.249821 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerName="extract-content"
Jan 21 15:44:16 crc kubenswrapper[4902]: E0121 15:44:16.249867 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerName="extract-utilities"
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.249884 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerName="extract-utilities"
Jan 21 15:44:16 crc kubenswrapper[4902]: E0121 15:44:16.249943 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerName="registry-server"
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.249960 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerName="registry-server"
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.250336 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerName="registry-server"
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.252537 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vl7zv"
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.312646 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vl7zv"]
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.428484 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-catalog-content\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv"
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.428538 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-utilities\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv"
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.428623 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lfsz\" (UniqueName: \"kubernetes.io/projected/ca9083b7-b28b-4908-8185-7284e29e74d9-kube-api-access-7lfsz\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv"
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.530610 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lfsz\" (UniqueName: \"kubernetes.io/projected/ca9083b7-b28b-4908-8185-7284e29e74d9-kube-api-access-7lfsz\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv"
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.530693 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-catalog-content\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv"
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.530725 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-utilities\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv"
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.531453 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-utilities\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv"
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.531749 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-catalog-content\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv"
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.555220 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lfsz\" (UniqueName: \"kubernetes.io/projected/ca9083b7-b28b-4908-8185-7284e29e74d9-kube-api-access-7lfsz\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv"
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.607677 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vl7zv"
Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.925964 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vl7zv"]
Jan 21 15:44:17 crc kubenswrapper[4902]: I0121 15:44:17.199589 4902 generic.go:334] "Generic (PLEG): container finished" podID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerID="51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07" exitCode=0
Jan 21 15:44:17 crc kubenswrapper[4902]: I0121 15:44:17.199637 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl7zv" event={"ID":"ca9083b7-b28b-4908-8185-7284e29e74d9","Type":"ContainerDied","Data":"51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07"}
Jan 21 15:44:17 crc kubenswrapper[4902]: I0121 15:44:17.199875 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl7zv" event={"ID":"ca9083b7-b28b-4908-8185-7284e29e74d9","Type":"ContainerStarted","Data":"f60df5e2d75103334e1413a85fb112dc4d9fef95fbf03278ba675ec2bcfacaf4"}
Jan 21 15:44:18 crc kubenswrapper[4902]: I0121 15:44:18.211837 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl7zv" event={"ID":"ca9083b7-b28b-4908-8185-7284e29e74d9","Type":"ContainerStarted","Data":"f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999"}
Jan 21 15:44:19 crc kubenswrapper[4902]: I0121 15:44:19.223320 4902 generic.go:334] "Generic (PLEG): container finished" podID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerID="f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999" exitCode=0
Jan 21 15:44:19 crc kubenswrapper[4902]: I0121 15:44:19.223435 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl7zv" event={"ID":"ca9083b7-b28b-4908-8185-7284e29e74d9","Type":"ContainerDied","Data":"f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999"}
Jan 21 15:44:21 crc kubenswrapper[4902]: I0121 15:44:21.240227 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl7zv" event={"ID":"ca9083b7-b28b-4908-8185-7284e29e74d9","Type":"ContainerStarted","Data":"a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436"}
Jan 21 15:44:26 crc kubenswrapper[4902]: I0121 15:44:26.608841 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vl7zv"
Jan 21 15:44:26 crc kubenswrapper[4902]: I0121 15:44:26.609573 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vl7zv"
Jan 21 15:44:26 crc kubenswrapper[4902]: I0121 15:44:26.689648 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vl7zv"
pod="openshift-marketplace/community-operators-vl7zv" podStartSLOduration=7.254519193 podStartE2EDuration="10.721582256s" podCreationTimestamp="2026-01-21 15:44:16 +0000 UTC" firstStartedPulling="2026-01-21 15:44:17.200958266 +0000 UTC m=+4219.277791295" lastFinishedPulling="2026-01-21 15:44:20.668021289 +0000 UTC m=+4222.744854358" observedRunningTime="2026-01-21 15:44:21.272603882 +0000 UTC m=+4223.349436911" watchObservedRunningTime="2026-01-21 15:44:26.721582256 +0000 UTC m=+4228.798415295" Jan 21 15:44:27 crc kubenswrapper[4902]: I0121 15:44:27.359385 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:27 crc kubenswrapper[4902]: I0121 15:44:27.410578 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vl7zv"] Jan 21 15:44:29 crc kubenswrapper[4902]: I0121 15:44:29.305222 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vl7zv" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerName="registry-server" containerID="cri-o://a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436" gracePeriod=2 Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.224563 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.319300 4902 generic.go:334] "Generic (PLEG): container finished" podID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerID="a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436" exitCode=0 Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.319408 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.319428 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl7zv" event={"ID":"ca9083b7-b28b-4908-8185-7284e29e74d9","Type":"ContainerDied","Data":"a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436"} Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.319797 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl7zv" event={"ID":"ca9083b7-b28b-4908-8185-7284e29e74d9","Type":"ContainerDied","Data":"f60df5e2d75103334e1413a85fb112dc4d9fef95fbf03278ba675ec2bcfacaf4"} Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.319818 4902 scope.go:117] "RemoveContainer" containerID="a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.341361 4902 scope.go:117] "RemoveContainer" containerID="f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.353302 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-utilities\") pod \"ca9083b7-b28b-4908-8185-7284e29e74d9\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.353391 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lfsz\" (UniqueName: \"kubernetes.io/projected/ca9083b7-b28b-4908-8185-7284e29e74d9-kube-api-access-7lfsz\") pod \"ca9083b7-b28b-4908-8185-7284e29e74d9\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.353529 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-catalog-content\") pod \"ca9083b7-b28b-4908-8185-7284e29e74d9\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.354394 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-utilities" (OuterVolumeSpecName: "utilities") pod "ca9083b7-b28b-4908-8185-7284e29e74d9" (UID: "ca9083b7-b28b-4908-8185-7284e29e74d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.359005 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca9083b7-b28b-4908-8185-7284e29e74d9-kube-api-access-7lfsz" (OuterVolumeSpecName: "kube-api-access-7lfsz") pod "ca9083b7-b28b-4908-8185-7284e29e74d9" (UID: "ca9083b7-b28b-4908-8185-7284e29e74d9"). InnerVolumeSpecName "kube-api-access-7lfsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.367509 4902 scope.go:117] "RemoveContainer" containerID="51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.400617 4902 scope.go:117] "RemoveContainer" containerID="a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436" Jan 21 15:44:30 crc kubenswrapper[4902]: E0121 15:44:30.401163 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436\": container with ID starting with a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436 not found: ID does not exist" containerID="a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.401205 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436"} err="failed to get container status \"a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436\": rpc error: code = NotFound desc = could not find container \"a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436\": container with ID starting with a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436 not found: ID does not exist" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.401231 4902 scope.go:117] "RemoveContainer" containerID="f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999" Jan 21 15:44:30 crc kubenswrapper[4902]: E0121 15:44:30.401865 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999\": container with ID starting with f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999 not found: ID does not exist" containerID="f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.401937 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999"} err="failed to get container status \"f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999\": rpc error: code = NotFound desc = could not find container \"f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999\": container with ID starting with f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999 not found: ID does not exist" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.401983 4902 scope.go:117] "RemoveContainer" containerID="51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07" Jan 21 15:44:30 crc kubenswrapper[4902]: E0121 15:44:30.402486 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07\": container with ID starting with 51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07 not found: ID does not exist" containerID="51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.402524 4902 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07"} err="failed to get container status \"51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07\": rpc error: code = NotFound desc = could not find container \"51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07\": container with ID starting with 51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07 not found: ID does not exist" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.415732 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca9083b7-b28b-4908-8185-7284e29e74d9" (UID: "ca9083b7-b28b-4908-8185-7284e29e74d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.455501 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.455535 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lfsz\" (UniqueName: \"kubernetes.io/projected/ca9083b7-b28b-4908-8185-7284e29e74d9-kube-api-access-7lfsz\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.455548 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.656162 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vl7zv"] Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.663521 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vl7zv"] Jan 21 15:44:32 crc kubenswrapper[4902]: I0121 15:44:32.306269 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" path="/var/lib/kubelet/pods/ca9083b7-b28b-4908-8185-7284e29e74d9/volumes" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.183417 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m"] Jan 21 15:45:00 crc kubenswrapper[4902]: E0121 15:45:00.184300 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerName="registry-server" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.184318 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerName="registry-server" Jan 21 15:45:00 crc kubenswrapper[4902]: E0121 15:45:00.184352 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerName="extract-content" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.184361 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerName="extract-content" Jan 21 15:45:00 crc kubenswrapper[4902]: E0121 15:45:00.184377 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerName="extract-utilities" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 
Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.184544 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerName="registry-server"
Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.185180 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m"
Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.188523 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.188968 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.191014 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m"]
Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.322081 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6893ec42-9882-4d98-9d44-ab57d7366115-config-volume\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m"
Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.322147 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6893ec42-9882-4d98-9d44-ab57d7366115-secret-volume\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m"
Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.322306 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrw8b\" (UniqueName: \"kubernetes.io/projected/6893ec42-9882-4d98-9d44-ab57d7366115-kube-api-access-mrw8b\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m"
Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.424073 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6893ec42-9882-4d98-9d44-ab57d7366115-config-volume\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m"
Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.424149 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6893ec42-9882-4d98-9d44-ab57d7366115-secret-volume\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m"
Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.424230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrw8b\" (UniqueName:
\"kubernetes.io/projected/6893ec42-9882-4d98-9d44-ab57d7366115-kube-api-access-mrw8b\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.425698 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6893ec42-9882-4d98-9d44-ab57d7366115-config-volume\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.431283 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6893ec42-9882-4d98-9d44-ab57d7366115-secret-volume\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.445897 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrw8b\" (UniqueName: \"kubernetes.io/projected/6893ec42-9882-4d98-9d44-ab57d7366115-kube-api-access-mrw8b\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.513322 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.936738 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m"] Jan 21 15:45:01 crc kubenswrapper[4902]: I0121 15:45:01.594872 4902 generic.go:334] "Generic (PLEG): container finished" podID="6893ec42-9882-4d98-9d44-ab57d7366115" containerID="d06aac15e4e0103b43e5e004729564b5803ddb7e6af160a1d792ad3827466cc3" exitCode=0 Jan 21 15:45:01 crc kubenswrapper[4902]: I0121 15:45:01.596040 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" event={"ID":"6893ec42-9882-4d98-9d44-ab57d7366115","Type":"ContainerDied","Data":"d06aac15e4e0103b43e5e004729564b5803ddb7e6af160a1d792ad3827466cc3"} Jan 21 15:45:01 crc kubenswrapper[4902]: I0121 15:45:01.596206 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" event={"ID":"6893ec42-9882-4d98-9d44-ab57d7366115","Type":"ContainerStarted","Data":"b765903e04dab520f1ef47d032e2c1d9572c41170886af8820e8387445ee2867"} Jan 21 15:45:02 crc kubenswrapper[4902]: I0121 15:45:02.833683 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:02 crc kubenswrapper[4902]: I0121 15:45:02.957869 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrw8b\" (UniqueName: \"kubernetes.io/projected/6893ec42-9882-4d98-9d44-ab57d7366115-kube-api-access-mrw8b\") pod \"6893ec42-9882-4d98-9d44-ab57d7366115\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " Jan 21 15:45:02 crc kubenswrapper[4902]: I0121 15:45:02.957965 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6893ec42-9882-4d98-9d44-ab57d7366115-config-volume\") pod \"6893ec42-9882-4d98-9d44-ab57d7366115\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " Jan 21 15:45:02 crc kubenswrapper[4902]: I0121 15:45:02.958161 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6893ec42-9882-4d98-9d44-ab57d7366115-secret-volume\") pod \"6893ec42-9882-4d98-9d44-ab57d7366115\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " Jan 21 15:45:02 crc kubenswrapper[4902]: I0121 15:45:02.958944 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6893ec42-9882-4d98-9d44-ab57d7366115-config-volume" (OuterVolumeSpecName: "config-volume") pod "6893ec42-9882-4d98-9d44-ab57d7366115" (UID: "6893ec42-9882-4d98-9d44-ab57d7366115"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:45:02 crc kubenswrapper[4902]: I0121 15:45:02.965194 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6893ec42-9882-4d98-9d44-ab57d7366115-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6893ec42-9882-4d98-9d44-ab57d7366115" (UID: "6893ec42-9882-4d98-9d44-ab57d7366115"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:45:02 crc kubenswrapper[4902]: I0121 15:45:02.965422 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6893ec42-9882-4d98-9d44-ab57d7366115-kube-api-access-mrw8b" (OuterVolumeSpecName: "kube-api-access-mrw8b") pod "6893ec42-9882-4d98-9d44-ab57d7366115" (UID: "6893ec42-9882-4d98-9d44-ab57d7366115"). InnerVolumeSpecName "kube-api-access-mrw8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:45:03 crc kubenswrapper[4902]: I0121 15:45:03.059919 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrw8b\" (UniqueName: \"kubernetes.io/projected/6893ec42-9882-4d98-9d44-ab57d7366115-kube-api-access-mrw8b\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:03 crc kubenswrapper[4902]: I0121 15:45:03.059962 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6893ec42-9882-4d98-9d44-ab57d7366115-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:03 crc kubenswrapper[4902]: I0121 15:45:03.059974 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6893ec42-9882-4d98-9d44-ab57d7366115-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:03 crc kubenswrapper[4902]: I0121 15:45:03.613690 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" event={"ID":"6893ec42-9882-4d98-9d44-ab57d7366115","Type":"ContainerDied","Data":"b765903e04dab520f1ef47d032e2c1d9572c41170886af8820e8387445ee2867"} Jan 21 15:45:03 crc kubenswrapper[4902]: I0121 15:45:03.613744 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b765903e04dab520f1ef47d032e2c1d9572c41170886af8820e8387445ee2867" Jan 21 15:45:03 crc kubenswrapper[4902]: I0121 15:45:03.613830 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:03 crc kubenswrapper[4902]: I0121 15:45:03.911961 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th"] Jan 21 15:45:03 crc kubenswrapper[4902]: I0121 15:45:03.917201 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th"] Jan 21 15:45:04 crc kubenswrapper[4902]: I0121 15:45:04.308541 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ada0d02-9902-4746-b1ad-42b3f9e711a7" path="/var/lib/kubelet/pods/0ada0d02-9902-4746-b1ad-42b3f9e711a7/volumes" Jan 21 15:45:22 crc kubenswrapper[4902]: I0121 15:45:22.431761 4902 scope.go:117] "RemoveContainer" containerID="7ee1e059c9213e4cad45fc2396c6626d215288fb3b3b38f6079f8306a505e407" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.307710 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2zfl9"] Jan 21 15:45:41 crc kubenswrapper[4902]: E0121 15:45:41.308552 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6893ec42-9882-4d98-9d44-ab57d7366115" containerName="collect-profiles" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.308570 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="6893ec42-9882-4d98-9d44-ab57d7366115" containerName="collect-profiles" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.308750 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="6893ec42-9882-4d98-9d44-ab57d7366115" containerName="collect-profiles" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.309957 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.320086 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2zfl9"] Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.485830 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-utilities\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.485880 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pl8t\" (UniqueName: \"kubernetes.io/projected/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-kube-api-access-7pl8t\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.485950 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-catalog-content\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.587101 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-utilities\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.587383 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pl8t\" (UniqueName: \"kubernetes.io/projected/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-kube-api-access-7pl8t\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.587505 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-catalog-content\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.587637 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-utilities\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.587969 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-catalog-content\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.613016 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7pl8t\" (UniqueName: \"kubernetes.io/projected/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-kube-api-access-7pl8t\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.633064 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:42 crc kubenswrapper[4902]: I0121 15:45:42.113960 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2zfl9"] Jan 21 15:45:42 crc kubenswrapper[4902]: I0121 15:45:42.940081 4902 generic.go:334] "Generic (PLEG): container finished" podID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerID="636d22adb461bb6373e2fe80b61c78f4fbed5473aeb591006417a99bb62d7944" exitCode=0 Jan 21 15:45:42 crc kubenswrapper[4902]: I0121 15:45:42.940332 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfl9" event={"ID":"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3","Type":"ContainerDied","Data":"636d22adb461bb6373e2fe80b61c78f4fbed5473aeb591006417a99bb62d7944"} Jan 21 15:45:42 crc kubenswrapper[4902]: I0121 15:45:42.940359 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfl9" event={"ID":"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3","Type":"ContainerStarted","Data":"18a0fa1321e32542790fd0c6a88b5c886bd6611b61f239edf8b213986060be22"} Jan 21 15:45:42 crc kubenswrapper[4902]: I0121 15:45:42.941986 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:45:43 crc kubenswrapper[4902]: I0121 15:45:43.949017 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfl9" event={"ID":"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3","Type":"ContainerStarted","Data":"dd11e45cc0c74ef372b06f28ed0e8d30f8550b3d0f4207853db07f59a58acb6c"} Jan 21 15:45:44 crc kubenswrapper[4902]: I0121 15:45:44.958557 4902 generic.go:334] "Generic (PLEG): container finished" podID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerID="dd11e45cc0c74ef372b06f28ed0e8d30f8550b3d0f4207853db07f59a58acb6c" exitCode=0 Jan 21 15:45:44 crc kubenswrapper[4902]: I0121 15:45:44.958619 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfl9" event={"ID":"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3","Type":"ContainerDied","Data":"dd11e45cc0c74ef372b06f28ed0e8d30f8550b3d0f4207853db07f59a58acb6c"} Jan 21 15:45:45 crc kubenswrapper[4902]: I0121 15:45:45.967411 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfl9" event={"ID":"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3","Type":"ContainerStarted","Data":"80bde30b7f08416842fa9bf564e5ae365cc209b3408a91a7116d04988188a363"} Jan 21 15:45:45 crc kubenswrapper[4902]: I0121 15:45:45.986346 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2zfl9" podStartSLOduration=2.576641285 podStartE2EDuration="4.986324474s" podCreationTimestamp="2026-01-21 15:45:41 +0000 UTC" firstStartedPulling="2026-01-21 15:45:42.94174838 +0000 UTC m=+4305.018581409" lastFinishedPulling="2026-01-21 15:45:45.351431569 +0000 UTC m=+4307.428264598" observedRunningTime="2026-01-21 15:45:45.985532491 +0000 UTC m=+4308.062365520" watchObservedRunningTime="2026-01-21 15:45:45.986324474 +0000 UTC m=+4308.063157513" Jan 21 15:45:47 crc 
kubenswrapper[4902]: I0121 15:45:47.769454 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:45:47 crc kubenswrapper[4902]: I0121 15:45:47.769791 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:45:51 crc kubenswrapper[4902]: I0121 15:45:51.633752 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:51 crc kubenswrapper[4902]: I0121 15:45:51.634083 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:51 crc kubenswrapper[4902]: I0121 15:45:51.678911 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:52 crc kubenswrapper[4902]: I0121 15:45:52.098842 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:52 crc kubenswrapper[4902]: I0121 15:45:52.166287 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2zfl9"] Jan 21 15:45:54 crc kubenswrapper[4902]: I0121 15:45:54.018767 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2zfl9" podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerName="registry-server" containerID="cri-o://80bde30b7f08416842fa9bf564e5ae365cc209b3408a91a7116d04988188a363" gracePeriod=2 Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.040714 4902 generic.go:334] "Generic (PLEG): container finished" podID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerID="80bde30b7f08416842fa9bf564e5ae365cc209b3408a91a7116d04988188a363" exitCode=0 Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.040813 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfl9" event={"ID":"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3","Type":"ContainerDied","Data":"80bde30b7f08416842fa9bf564e5ae365cc209b3408a91a7116d04988188a363"} Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.041268 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfl9" event={"ID":"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3","Type":"ContainerDied","Data":"18a0fa1321e32542790fd0c6a88b5c886bd6611b61f239edf8b213986060be22"} Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.041285 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18a0fa1321e32542790fd0c6a88b5c886bd6611b61f239edf8b213986060be22" Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.069424 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.210734 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-catalog-content\") pod \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.210839 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-utilities\") pod \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.210914 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pl8t\" (UniqueName: \"kubernetes.io/projected/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-kube-api-access-7pl8t\") pod \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.211883 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-utilities" (OuterVolumeSpecName: "utilities") pod "0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" (UID: "0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.221315 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-kube-api-access-7pl8t" (OuterVolumeSpecName: "kube-api-access-7pl8t") pod "0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" (UID: "0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3"). InnerVolumeSpecName "kube-api-access-7pl8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.313613 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.313789 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pl8t\" (UniqueName: \"kubernetes.io/projected/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-kube-api-access-7pl8t\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.328132 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" (UID: "0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.414993 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:58 crc kubenswrapper[4902]: I0121 15:45:58.046936 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:58 crc kubenswrapper[4902]: I0121 15:45:58.076671 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2zfl9"] Jan 21 15:45:58 crc kubenswrapper[4902]: I0121 15:45:58.088623 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2zfl9"] Jan 21 15:45:58 crc kubenswrapper[4902]: I0121 15:45:58.307419 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" path="/var/lib/kubelet/pods/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3/volumes" Jan 21 15:46:17 crc kubenswrapper[4902]: I0121 15:46:17.769663 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:46:17 crc kubenswrapper[4902]: I0121 15:46:17.770311 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:46:47 crc kubenswrapper[4902]: I0121 15:46:47.769495 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:46:47 crc kubenswrapper[4902]: I0121 15:46:47.770260 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:46:47 crc kubenswrapper[4902]: I0121 15:46:47.770324 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:46:47 crc kubenswrapper[4902]: I0121 15:46:47.771204 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96a0c468edaf7e0a12819e67dd2a8451666decc72c19f24c18c03a05bac8ba27"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:46:47 crc kubenswrapper[4902]: I0121 15:46:47.771294 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://96a0c468edaf7e0a12819e67dd2a8451666decc72c19f24c18c03a05bac8ba27" gracePeriod=600 Jan 21 15:46:48 crc kubenswrapper[4902]: I0121 15:46:48.430004 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="96a0c468edaf7e0a12819e67dd2a8451666decc72c19f24c18c03a05bac8ba27" exitCode=0 Jan 21 15:46:48 crc kubenswrapper[4902]: I0121 15:46:48.430333 4902 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"96a0c468edaf7e0a12819e67dd2a8451666decc72c19f24c18c03a05bac8ba27"} Jan 21 15:46:48 crc kubenswrapper[4902]: I0121 15:46:48.430366 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea"} Jan 21 15:46:48 crc kubenswrapper[4902]: I0121 15:46:48.430386 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:49:17 crc kubenswrapper[4902]: I0121 15:49:17.770289 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:49:17 crc kubenswrapper[4902]: I0121 15:49:17.770875 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:49:47 crc kubenswrapper[4902]: I0121 15:49:47.769378 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:49:47 crc kubenswrapper[4902]: I0121 15:49:47.770070 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.769347 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.769866 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.769929 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.770707 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.770784 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" gracePeriod=600 Jan 21 15:50:17 crc kubenswrapper[4902]: E0121 15:50:17.895333 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.989915 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" exitCode=0 Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.989967 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea"} Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.990008 4902 scope.go:117] "RemoveContainer" containerID="96a0c468edaf7e0a12819e67dd2a8451666decc72c19f24c18c03a05bac8ba27" Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.990527 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:50:17 crc kubenswrapper[4902]: E0121 15:50:17.990788 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.627493 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x6vbm"] Jan 21 15:50:30 crc kubenswrapper[4902]: E0121 15:50:30.628188 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerName="extract-utilities" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.628200 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerName="extract-utilities" Jan 21 15:50:30 crc kubenswrapper[4902]: E0121 15:50:30.628219 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerName="extract-content" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.628224 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerName="extract-content" Jan 21 15:50:30 crc kubenswrapper[4902]: E0121 15:50:30.628242 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerName="registry-server" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.628248 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerName="registry-server" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.628371 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerName="registry-server" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.631439 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.635655 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6vbm"] Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.748412 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-utilities\") pod \"certified-operators-x6vbm\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.748500 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv95l\" (UniqueName: \"kubernetes.io/projected/c9df3040-081e-4e88-8681-9a9f78cc758b-kube-api-access-jv95l\") pod \"certified-operators-x6vbm\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.748552 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-catalog-content\") pod \"certified-operators-x6vbm\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.850832 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-catalog-content\") pod \"certified-operators-x6vbm\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.850974 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-utilities\") pod \"certified-operators-x6vbm\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.851010 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv95l\" (UniqueName: \"kubernetes.io/projected/c9df3040-081e-4e88-8681-9a9f78cc758b-kube-api-access-jv95l\") pod \"certified-operators-x6vbm\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.851528 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-utilities\") pod \"certified-operators-x6vbm\" 
(UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.851527 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-catalog-content\") pod \"certified-operators-x6vbm\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.873029 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv95l\" (UniqueName: \"kubernetes.io/projected/c9df3040-081e-4e88-8681-9a9f78cc758b-kube-api-access-jv95l\") pod \"certified-operators-x6vbm\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.954088 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:31 crc kubenswrapper[4902]: I0121 15:50:31.468034 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6vbm"] Jan 21 15:50:32 crc kubenswrapper[4902]: I0121 15:50:32.104968 4902 generic.go:334] "Generic (PLEG): container finished" podID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerID="3fd7269ed4af2b5ed8789200b615c63fc1a7f708f657559905419462e7af7de1" exitCode=0 Jan 21 15:50:32 crc kubenswrapper[4902]: I0121 15:50:32.105134 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6vbm" event={"ID":"c9df3040-081e-4e88-8681-9a9f78cc758b","Type":"ContainerDied","Data":"3fd7269ed4af2b5ed8789200b615c63fc1a7f708f657559905419462e7af7de1"} Jan 21 15:50:32 crc kubenswrapper[4902]: I0121 15:50:32.105377 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6vbm" event={"ID":"c9df3040-081e-4e88-8681-9a9f78cc758b","Type":"ContainerStarted","Data":"c7ba1146533bc1aa0237851f5de15a6b9343afd48314d2c3d47a704f104f667a"} Jan 21 15:50:32 crc kubenswrapper[4902]: I0121 15:50:32.294804 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:50:32 crc kubenswrapper[4902]: E0121 15:50:32.295235 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:50:33 crc kubenswrapper[4902]: I0121 15:50:33.112961 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6vbm" event={"ID":"c9df3040-081e-4e88-8681-9a9f78cc758b","Type":"ContainerStarted","Data":"832bdc2244fcacc08faf09f474999607b365e44c63c97c499a7f0ae90cc52a03"} Jan 21 15:50:34 crc kubenswrapper[4902]: I0121 15:50:34.132383 4902 generic.go:334] "Generic (PLEG): container finished" podID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerID="832bdc2244fcacc08faf09f474999607b365e44c63c97c499a7f0ae90cc52a03" exitCode=0 Jan 21 15:50:34 crc kubenswrapper[4902]: I0121 15:50:34.132475 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-x6vbm" event={"ID":"c9df3040-081e-4e88-8681-9a9f78cc758b","Type":"ContainerDied","Data":"832bdc2244fcacc08faf09f474999607b365e44c63c97c499a7f0ae90cc52a03"} Jan 21 15:50:35 crc kubenswrapper[4902]: I0121 15:50:35.142798 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6vbm" event={"ID":"c9df3040-081e-4e88-8681-9a9f78cc758b","Type":"ContainerStarted","Data":"01b1e3385b91a0ac713735c08ca6d5002c8c460c4cfa3d2e686ace79189fad0a"} Jan 21 15:50:35 crc kubenswrapper[4902]: I0121 15:50:35.169009 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x6vbm" podStartSLOduration=2.526322616 podStartE2EDuration="5.168983512s" podCreationTimestamp="2026-01-21 15:50:30 +0000 UTC" firstStartedPulling="2026-01-21 15:50:32.107099694 +0000 UTC m=+4594.183932723" lastFinishedPulling="2026-01-21 15:50:34.74976059 +0000 UTC m=+4596.826593619" observedRunningTime="2026-01-21 15:50:35.156260734 +0000 UTC m=+4597.233093773" watchObservedRunningTime="2026-01-21 15:50:35.168983512 +0000 UTC m=+4597.245816541" Jan 21 15:50:40 crc kubenswrapper[4902]: I0121 15:50:40.955270 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:40 crc kubenswrapper[4902]: I0121 15:50:40.955732 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:41 crc kubenswrapper[4902]: I0121 15:50:41.446646 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:41 crc kubenswrapper[4902]: I0121 15:50:41.493134 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:41 crc kubenswrapper[4902]: I0121 15:50:41.685121 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x6vbm"] Jan 21 15:50:43 crc kubenswrapper[4902]: I0121 15:50:43.197859 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x6vbm" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerName="registry-server" containerID="cri-o://01b1e3385b91a0ac713735c08ca6d5002c8c460c4cfa3d2e686ace79189fad0a" gracePeriod=2 Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.206899 4902 generic.go:334] "Generic (PLEG): container finished" podID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerID="01b1e3385b91a0ac713735c08ca6d5002c8c460c4cfa3d2e686ace79189fad0a" exitCode=0 Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.207005 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6vbm" event={"ID":"c9df3040-081e-4e88-8681-9a9f78cc758b","Type":"ContainerDied","Data":"01b1e3385b91a0ac713735c08ca6d5002c8c460c4cfa3d2e686ace79189fad0a"} Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.207305 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6vbm" event={"ID":"c9df3040-081e-4e88-8681-9a9f78cc758b","Type":"ContainerDied","Data":"c7ba1146533bc1aa0237851f5de15a6b9343afd48314d2c3d47a704f104f667a"} Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.207325 4902 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c7ba1146533bc1aa0237851f5de15a6b9343afd48314d2c3d47a704f104f667a" Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.259322 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.348104 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-catalog-content\") pod \"c9df3040-081e-4e88-8681-9a9f78cc758b\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.348189 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv95l\" (UniqueName: \"kubernetes.io/projected/c9df3040-081e-4e88-8681-9a9f78cc758b-kube-api-access-jv95l\") pod \"c9df3040-081e-4e88-8681-9a9f78cc758b\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.348251 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-utilities\") pod \"c9df3040-081e-4e88-8681-9a9f78cc758b\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.356131 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-utilities" (OuterVolumeSpecName: "utilities") pod "c9df3040-081e-4e88-8681-9a9f78cc758b" (UID: "c9df3040-081e-4e88-8681-9a9f78cc758b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.357298 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9df3040-081e-4e88-8681-9a9f78cc758b-kube-api-access-jv95l" (OuterVolumeSpecName: "kube-api-access-jv95l") pod "c9df3040-081e-4e88-8681-9a9f78cc758b" (UID: "c9df3040-081e-4e88-8681-9a9f78cc758b"). InnerVolumeSpecName "kube-api-access-jv95l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.400901 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9df3040-081e-4e88-8681-9a9f78cc758b" (UID: "c9df3040-081e-4e88-8681-9a9f78cc758b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.449574 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.449617 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv95l\" (UniqueName: \"kubernetes.io/projected/c9df3040-081e-4e88-8681-9a9f78cc758b-kube-api-access-jv95l\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.449635 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:45 crc kubenswrapper[4902]: I0121 15:50:45.214410 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:45 crc kubenswrapper[4902]: I0121 15:50:45.257999 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x6vbm"] Jan 21 15:50:45 crc kubenswrapper[4902]: I0121 15:50:45.267164 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x6vbm"] Jan 21 15:50:46 crc kubenswrapper[4902]: I0121 15:50:46.295006 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:50:46 crc kubenswrapper[4902]: E0121 15:50:46.295321 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:50:46 crc kubenswrapper[4902]: I0121 15:50:46.310416 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" path="/var/lib/kubelet/pods/c9df3040-081e-4e88-8681-9a9f78cc758b/volumes" Jan 21 15:51:00 crc kubenswrapper[4902]: I0121 15:51:00.294894 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:51:00 crc kubenswrapper[4902]: E0121 15:51:00.295472 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:51:12 crc kubenswrapper[4902]: I0121 15:51:12.294848 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:51:12 crc kubenswrapper[4902]: E0121 15:51:12.296630 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:51:24 crc kubenswrapper[4902]: I0121 15:51:24.294681 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:51:24 crc kubenswrapper[4902]: E0121 15:51:24.296555 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:51:38 crc kubenswrapper[4902]: I0121 15:51:38.299300 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:51:38 crc kubenswrapper[4902]: E0121 15:51:38.300054 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:51:49 crc kubenswrapper[4902]: I0121 15:51:49.959366 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5djtc"] Jan 21 15:51:49 crc kubenswrapper[4902]: E0121 15:51:49.960409 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerName="extract-content" Jan 21 15:51:49 crc kubenswrapper[4902]: I0121 15:51:49.960429 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerName="extract-content" Jan 21 15:51:49 crc kubenswrapper[4902]: E0121 15:51:49.960473 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerName="extract-utilities" Jan 21 15:51:49 crc kubenswrapper[4902]: I0121 15:51:49.960485 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerName="extract-utilities" Jan 21 15:51:49 crc kubenswrapper[4902]: E0121 15:51:49.960506 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerName="registry-server" Jan 21 15:51:49 crc kubenswrapper[4902]: I0121 15:51:49.960518 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerName="registry-server" Jan 21 15:51:49 crc kubenswrapper[4902]: I0121 15:51:49.960729 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerName="registry-server" Jan 21 15:51:49 crc kubenswrapper[4902]: I0121 15:51:49.962354 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:49 crc kubenswrapper[4902]: I0121 15:51:49.978411 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5djtc"] Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.025797 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-utilities\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.025850 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-catalog-content\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.026140 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cgxh\" (UniqueName: \"kubernetes.io/projected/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-kube-api-access-6cgxh\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.127829 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-utilities\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.127920 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-catalog-content\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.127998 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cgxh\" (UniqueName: \"kubernetes.io/projected/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-kube-api-access-6cgxh\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.128404 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-utilities\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.128415 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-catalog-content\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.146821 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6cgxh\" (UniqueName: \"kubernetes.io/projected/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-kube-api-access-6cgxh\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.290087 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.509752 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5djtc"] Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.710274 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5djtc" event={"ID":"16275b0c-9958-4f4c-aacb-bdeed1dea4e9","Type":"ContainerStarted","Data":"758b8cb5d5d9a16f8f5849e8f336c87678d827aa78b7f59e5af43acd178efc32"} Jan 21 15:51:51 crc kubenswrapper[4902]: I0121 15:51:51.295358 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:51:51 crc kubenswrapper[4902]: E0121 15:51:51.295607 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:51:51 crc kubenswrapper[4902]: I0121 15:51:51.727606 4902 generic.go:334] "Generic (PLEG): container finished" podID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerID="283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4" exitCode=0 Jan 21 15:51:51 crc kubenswrapper[4902]: I0121 15:51:51.727692 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5djtc" event={"ID":"16275b0c-9958-4f4c-aacb-bdeed1dea4e9","Type":"ContainerDied","Data":"283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4"} Jan 21 15:51:51 crc kubenswrapper[4902]: I0121 15:51:51.730974 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:51:53 crc kubenswrapper[4902]: E0121 15:51:53.005134 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16275b0c_9958_4f4c_aacb_bdeed1dea4e9.slice/crio-conmon-cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:51:53 crc kubenswrapper[4902]: I0121 15:51:53.746897 4902 generic.go:334] "Generic (PLEG): container finished" podID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerID="cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7" exitCode=0 Jan 21 15:51:53 crc kubenswrapper[4902]: I0121 15:51:53.747026 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5djtc" event={"ID":"16275b0c-9958-4f4c-aacb-bdeed1dea4e9","Type":"ContainerDied","Data":"cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7"} Jan 21 15:51:54 crc kubenswrapper[4902]: I0121 15:51:54.757064 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5djtc" 
event={"ID":"16275b0c-9958-4f4c-aacb-bdeed1dea4e9","Type":"ContainerStarted","Data":"21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f"} Jan 21 15:51:54 crc kubenswrapper[4902]: I0121 15:51:54.779539 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5djtc" podStartSLOduration=3.376056637 podStartE2EDuration="5.77951659s" podCreationTimestamp="2026-01-21 15:51:49 +0000 UTC" firstStartedPulling="2026-01-21 15:51:51.730690569 +0000 UTC m=+4673.807523598" lastFinishedPulling="2026-01-21 15:51:54.134150512 +0000 UTC m=+4676.210983551" observedRunningTime="2026-01-21 15:51:54.773735467 +0000 UTC m=+4676.850568536" watchObservedRunningTime="2026-01-21 15:51:54.77951659 +0000 UTC m=+4676.856349639" Jan 21 15:52:00 crc kubenswrapper[4902]: I0121 15:52:00.290823 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:52:00 crc kubenswrapper[4902]: I0121 15:52:00.291575 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:52:00 crc kubenswrapper[4902]: I0121 15:52:00.337835 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:52:00 crc kubenswrapper[4902]: I0121 15:52:00.865538 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:52:00 crc kubenswrapper[4902]: I0121 15:52:00.918676 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5djtc"] Jan 21 15:52:02 crc kubenswrapper[4902]: I0121 15:52:02.813493 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5djtc" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerName="registry-server" containerID="cri-o://21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f" gracePeriod=2 Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.261875 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.336143 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-utilities\") pod \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.336270 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-catalog-content\") pod \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.336489 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cgxh\" (UniqueName: \"kubernetes.io/projected/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-kube-api-access-6cgxh\") pod \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.336892 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-utilities" (OuterVolumeSpecName: "utilities") pod "16275b0c-9958-4f4c-aacb-bdeed1dea4e9" (UID: "16275b0c-9958-4f4c-aacb-bdeed1dea4e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.337098 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.341848 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-kube-api-access-6cgxh" (OuterVolumeSpecName: "kube-api-access-6cgxh") pod "16275b0c-9958-4f4c-aacb-bdeed1dea4e9" (UID: "16275b0c-9958-4f4c-aacb-bdeed1dea4e9"). InnerVolumeSpecName "kube-api-access-6cgxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.365848 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16275b0c-9958-4f4c-aacb-bdeed1dea4e9" (UID: "16275b0c-9958-4f4c-aacb-bdeed1dea4e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.437862 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.437902 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cgxh\" (UniqueName: \"kubernetes.io/projected/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-kube-api-access-6cgxh\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.825854 4902 generic.go:334] "Generic (PLEG): container finished" podID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerID="21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f" exitCode=0 Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.825919 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5djtc" event={"ID":"16275b0c-9958-4f4c-aacb-bdeed1dea4e9","Type":"ContainerDied","Data":"21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f"} Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.825960 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5djtc" event={"ID":"16275b0c-9958-4f4c-aacb-bdeed1dea4e9","Type":"ContainerDied","Data":"758b8cb5d5d9a16f8f5849e8f336c87678d827aa78b7f59e5af43acd178efc32"} Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.825989 4902 scope.go:117] "RemoveContainer" containerID="21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.828239 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.852524 4902 scope.go:117] "RemoveContainer" containerID="cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.883722 4902 scope.go:117] "RemoveContainer" containerID="283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.883773 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5djtc"] Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.907659 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5djtc"] Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.912705 4902 scope.go:117] "RemoveContainer" containerID="21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f" Jan 21 15:52:03 crc kubenswrapper[4902]: E0121 15:52:03.913484 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f\": container with ID starting with 21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f not found: ID does not exist" containerID="21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.913586 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f"} err="failed to get container status \"21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f\": rpc error: code = NotFound desc = could not find container \"21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f\": container with ID starting with 21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f not found: ID does not exist" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.913671 4902 scope.go:117] "RemoveContainer" containerID="cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7" Jan 21 15:52:03 crc kubenswrapper[4902]: E0121 15:52:03.914059 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7\": container with ID starting with cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7 not found: ID does not exist" containerID="cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.914120 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7"} err="failed to get container status \"cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7\": rpc error: code = NotFound desc = could not find container \"cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7\": container with ID starting with cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7 not found: ID does not exist" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.914155 4902 scope.go:117] "RemoveContainer" containerID="283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4" Jan 21 15:52:03 crc kubenswrapper[4902]: E0121 15:52:03.914721 4902 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4\": container with ID starting with 283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4 not found: ID does not exist" containerID="283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.914871 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4"} err="failed to get container status \"283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4\": rpc error: code = NotFound desc = could not find container \"283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4\": container with ID starting with 283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4 not found: ID does not exist" Jan 21 15:52:04 crc kubenswrapper[4902]: I0121 15:52:04.307093 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" path="/var/lib/kubelet/pods/16275b0c-9958-4f4c-aacb-bdeed1dea4e9/volumes" Jan 21 15:52:06 crc kubenswrapper[4902]: I0121 15:52:06.295630 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:52:06 crc kubenswrapper[4902]: E0121 15:52:06.296120 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:52:20 crc kubenswrapper[4902]: I0121 15:52:20.295352 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:52:20 crc kubenswrapper[4902]: E0121 15:52:20.296495 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:52:22 crc kubenswrapper[4902]: I0121 15:52:22.576792 4902 scope.go:117] "RemoveContainer" containerID="dd11e45cc0c74ef372b06f28ed0e8d30f8550b3d0f4207853db07f59a58acb6c" Jan 21 15:52:22 crc kubenswrapper[4902]: I0121 15:52:22.604065 4902 scope.go:117] "RemoveContainer" containerID="80bde30b7f08416842fa9bf564e5ae365cc209b3408a91a7116d04988188a363" Jan 21 15:52:22 crc kubenswrapper[4902]: I0121 15:52:22.658230 4902 scope.go:117] "RemoveContainer" containerID="636d22adb461bb6373e2fe80b61c78f4fbed5473aeb591006417a99bb62d7944" Jan 21 15:52:32 crc kubenswrapper[4902]: I0121 15:52:32.295097 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:52:32 crc kubenswrapper[4902]: E0121 15:52:32.296312 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:52:45 crc kubenswrapper[4902]: I0121 15:52:45.295997 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:52:45 crc kubenswrapper[4902]: E0121 15:52:45.296950 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:52:57 crc kubenswrapper[4902]: I0121 15:52:57.295077 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:52:57 crc kubenswrapper[4902]: E0121 15:52:57.296233 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:53:08 crc kubenswrapper[4902]: I0121 15:53:08.299689 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:53:08 crc kubenswrapper[4902]: E0121 15:53:08.300691 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:53:23 crc kubenswrapper[4902]: I0121 15:53:23.295354 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:53:23 crc kubenswrapper[4902]: E0121 15:53:23.296296 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:53:35 crc kubenswrapper[4902]: I0121 15:53:35.295414 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:53:35 crc kubenswrapper[4902]: E0121 15:53:35.296421 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:53:47 crc kubenswrapper[4902]: I0121 15:53:47.294842 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:53:47 crc kubenswrapper[4902]: E0121 15:53:47.295718 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.402283 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-2v7g4"] Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.413771 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-2v7g4"] Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.543832 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-q2dqw"] Jan 21 15:53:56 crc kubenswrapper[4902]: E0121 15:53:56.544216 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerName="extract-utilities" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.544236 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerName="extract-utilities" Jan 21 15:53:56 crc kubenswrapper[4902]: E0121 15:53:56.544261 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerName="extract-content" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.544271 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerName="extract-content" Jan 21 15:53:56 crc kubenswrapper[4902]: E0121 15:53:56.544296 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerName="registry-server" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.544304 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerName="registry-server" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.544476 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerName="registry-server" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.544990 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.547670 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.547699 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.547681 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.549154 4902 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-qwwr2" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.559326 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-q2dqw"] Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.682910 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/446c29bb-358e-4b5a-adaa-e4b06dc62edf-node-mnt\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.682955 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4h8z\" (UniqueName: \"kubernetes.io/projected/446c29bb-358e-4b5a-adaa-e4b06dc62edf-kube-api-access-j4h8z\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.683076 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/446c29bb-358e-4b5a-adaa-e4b06dc62edf-crc-storage\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.784787 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/446c29bb-358e-4b5a-adaa-e4b06dc62edf-crc-storage\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.784877 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/446c29bb-358e-4b5a-adaa-e4b06dc62edf-node-mnt\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.784913 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4h8z\" (UniqueName: \"kubernetes.io/projected/446c29bb-358e-4b5a-adaa-e4b06dc62edf-kube-api-access-j4h8z\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.785832 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/446c29bb-358e-4b5a-adaa-e4b06dc62edf-node-mnt\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " 
pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.786402 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/446c29bb-358e-4b5a-adaa-e4b06dc62edf-crc-storage\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.815862 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4h8z\" (UniqueName: \"kubernetes.io/projected/446c29bb-358e-4b5a-adaa-e4b06dc62edf-kube-api-access-j4h8z\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.917605 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:57 crc kubenswrapper[4902]: I0121 15:53:57.396988 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-q2dqw"] Jan 21 15:53:57 crc kubenswrapper[4902]: I0121 15:53:57.740423 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q2dqw" event={"ID":"446c29bb-358e-4b5a-adaa-e4b06dc62edf","Type":"ContainerStarted","Data":"5055af7fe172f0c127bf4f00512c45bf861271d3b80f2acd1cf94660106078be"} Jan 21 15:53:58 crc kubenswrapper[4902]: I0121 15:53:58.320935 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33301553-deaa-4183-9538-1a43f822be80" path="/var/lib/kubelet/pods/33301553-deaa-4183-9538-1a43f822be80/volumes" Jan 21 15:53:58 crc kubenswrapper[4902]: I0121 15:53:58.750754 4902 generic.go:334] "Generic (PLEG): container finished" podID="446c29bb-358e-4b5a-adaa-e4b06dc62edf" containerID="8cefa707fcc5de9979cdbb8b42dd928ba6a77070fd6ce0a791939df6996a702e" exitCode=0 Jan 21 15:53:58 crc kubenswrapper[4902]: I0121 15:53:58.752067 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q2dqw" event={"ID":"446c29bb-358e-4b5a-adaa-e4b06dc62edf","Type":"ContainerDied","Data":"8cefa707fcc5de9979cdbb8b42dd928ba6a77070fd6ce0a791939df6996a702e"} Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.014508 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.138758 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4h8z\" (UniqueName: \"kubernetes.io/projected/446c29bb-358e-4b5a-adaa-e4b06dc62edf-kube-api-access-j4h8z\") pod \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.138826 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/446c29bb-358e-4b5a-adaa-e4b06dc62edf-crc-storage\") pod \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.138871 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/446c29bb-358e-4b5a-adaa-e4b06dc62edf-node-mnt\") pod \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.139463 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/446c29bb-358e-4b5a-adaa-e4b06dc62edf-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "446c29bb-358e-4b5a-adaa-e4b06dc62edf" (UID: "446c29bb-358e-4b5a-adaa-e4b06dc62edf"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.145910 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/446c29bb-358e-4b5a-adaa-e4b06dc62edf-kube-api-access-j4h8z" (OuterVolumeSpecName: "kube-api-access-j4h8z") pod "446c29bb-358e-4b5a-adaa-e4b06dc62edf" (UID: "446c29bb-358e-4b5a-adaa-e4b06dc62edf"). InnerVolumeSpecName "kube-api-access-j4h8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.161723 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446c29bb-358e-4b5a-adaa-e4b06dc62edf-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "446c29bb-358e-4b5a-adaa-e4b06dc62edf" (UID: "446c29bb-358e-4b5a-adaa-e4b06dc62edf"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.240859 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4h8z\" (UniqueName: \"kubernetes.io/projected/446c29bb-358e-4b5a-adaa-e4b06dc62edf-kube-api-access-j4h8z\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.240905 4902 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/446c29bb-358e-4b5a-adaa-e4b06dc62edf-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.240925 4902 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/446c29bb-358e-4b5a-adaa-e4b06dc62edf-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.769870 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q2dqw" event={"ID":"446c29bb-358e-4b5a-adaa-e4b06dc62edf","Type":"ContainerDied","Data":"5055af7fe172f0c127bf4f00512c45bf861271d3b80f2acd1cf94660106078be"} Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.769911 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5055af7fe172f0c127bf4f00512c45bf861271d3b80f2acd1cf94660106078be" Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.769973 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:54:01 crc kubenswrapper[4902]: I0121 15:54:01.295640 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:54:01 crc kubenswrapper[4902]: E0121 15:54:01.296212 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.306264 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-q2dqw"] Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.313214 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-q2dqw"] Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.449673 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-p7fjr"] Jan 21 15:54:02 crc kubenswrapper[4902]: E0121 15:54:02.449948 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446c29bb-358e-4b5a-adaa-e4b06dc62edf" containerName="storage" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.449961 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="446c29bb-358e-4b5a-adaa-e4b06dc62edf" containerName="storage" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.450127 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="446c29bb-358e-4b5a-adaa-e4b06dc62edf" containerName="storage" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.450632 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.453074 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.453164 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.454331 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.455125 4902 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-qwwr2" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.466598 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-p7fjr"] Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.579555 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxwh5\" (UniqueName: \"kubernetes.io/projected/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-kube-api-access-dxwh5\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.579693 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-node-mnt\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.579767 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-crc-storage\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.680653 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxwh5\" (UniqueName: \"kubernetes.io/projected/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-kube-api-access-dxwh5\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.680717 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-node-mnt\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.680751 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-crc-storage\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.681214 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-node-mnt\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " 
pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.681943 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-crc-storage\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.698912 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxwh5\" (UniqueName: \"kubernetes.io/projected/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-kube-api-access-dxwh5\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.767204 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:03 crc kubenswrapper[4902]: I0121 15:54:03.005864 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-p7fjr"] Jan 21 15:54:03 crc kubenswrapper[4902]: W0121 15:54:03.008853 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8be47d9_db95_4ff5_8d65_2bea0c3d32be.slice/crio-99f85c2d6f8502e50cf501a7e3ef41712f820c0592badb8e8ad53ed816f55fdb WatchSource:0}: Error finding container 99f85c2d6f8502e50cf501a7e3ef41712f820c0592badb8e8ad53ed816f55fdb: Status 404 returned error can't find the container with id 99f85c2d6f8502e50cf501a7e3ef41712f820c0592badb8e8ad53ed816f55fdb Jan 21 15:54:03 crc kubenswrapper[4902]: I0121 15:54:03.793965 4902 generic.go:334] "Generic (PLEG): container finished" podID="b8be47d9-db95-4ff5-8d65-2bea0c3d32be" containerID="4ba7c3ca543296c161204a3805cebef49261b19ee3ebe778fe20553434c19786" exitCode=0 Jan 21 15:54:03 crc kubenswrapper[4902]: I0121 15:54:03.794234 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-p7fjr" event={"ID":"b8be47d9-db95-4ff5-8d65-2bea0c3d32be","Type":"ContainerDied","Data":"4ba7c3ca543296c161204a3805cebef49261b19ee3ebe778fe20553434c19786"} Jan 21 15:54:03 crc kubenswrapper[4902]: I0121 15:54:03.794263 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-p7fjr" event={"ID":"b8be47d9-db95-4ff5-8d65-2bea0c3d32be","Type":"ContainerStarted","Data":"99f85c2d6f8502e50cf501a7e3ef41712f820c0592badb8e8ad53ed816f55fdb"} Jan 21 15:54:04 crc kubenswrapper[4902]: I0121 15:54:04.302414 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="446c29bb-358e-4b5a-adaa-e4b06dc62edf" path="/var/lib/kubelet/pods/446c29bb-358e-4b5a-adaa-e4b06dc62edf/volumes" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.478230 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.625718 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxwh5\" (UniqueName: \"kubernetes.io/projected/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-kube-api-access-dxwh5\") pod \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.626297 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-node-mnt\") pod \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.626421 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-crc-storage\") pod \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.626432 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "b8be47d9-db95-4ff5-8d65-2bea0c3d32be" (UID: "b8be47d9-db95-4ff5-8d65-2bea0c3d32be"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.627228 4902 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.631156 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-kube-api-access-dxwh5" (OuterVolumeSpecName: "kube-api-access-dxwh5") pod "b8be47d9-db95-4ff5-8d65-2bea0c3d32be" (UID: "b8be47d9-db95-4ff5-8d65-2bea0c3d32be"). InnerVolumeSpecName "kube-api-access-dxwh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.645707 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "b8be47d9-db95-4ff5-8d65-2bea0c3d32be" (UID: "b8be47d9-db95-4ff5-8d65-2bea0c3d32be"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.728999 4902 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.729030 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxwh5\" (UniqueName: \"kubernetes.io/projected/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-kube-api-access-dxwh5\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.811685 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-p7fjr" event={"ID":"b8be47d9-db95-4ff5-8d65-2bea0c3d32be","Type":"ContainerDied","Data":"99f85c2d6f8502e50cf501a7e3ef41712f820c0592badb8e8ad53ed816f55fdb"} Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.811723 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99f85c2d6f8502e50cf501a7e3ef41712f820c0592badb8e8ad53ed816f55fdb" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.811734 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:12 crc kubenswrapper[4902]: I0121 15:54:12.295364 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:54:12 crc kubenswrapper[4902]: E0121 15:54:12.296265 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:54:23 crc kubenswrapper[4902]: I0121 15:54:23.648626 4902 scope.go:117] "RemoveContainer" containerID="bea584749b1ccfd891d97d3ebbaf45ab41b6cc3e6efd100d0aa2c6701cc97c94" Jan 21 15:54:24 crc kubenswrapper[4902]: I0121 15:54:24.296210 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:54:24 crc kubenswrapper[4902]: E0121 15:54:24.297020 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:54:36 crc kubenswrapper[4902]: I0121 15:54:36.295631 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:54:36 crc kubenswrapper[4902]: E0121 15:54:36.297392 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:54:51 crc kubenswrapper[4902]: I0121 
15:54:51.294625 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:54:51 crc kubenswrapper[4902]: E0121 15:54:51.295761 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:54:54 crc kubenswrapper[4902]: I0121 15:54:54.875240 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fszmf"] Jan 21 15:54:54 crc kubenswrapper[4902]: E0121 15:54:54.875597 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8be47d9-db95-4ff5-8d65-2bea0c3d32be" containerName="storage" Jan 21 15:54:54 crc kubenswrapper[4902]: I0121 15:54:54.875613 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8be47d9-db95-4ff5-8d65-2bea0c3d32be" containerName="storage" Jan 21 15:54:54 crc kubenswrapper[4902]: I0121 15:54:54.875799 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8be47d9-db95-4ff5-8d65-2bea0c3d32be" containerName="storage" Jan 21 15:54:54 crc kubenswrapper[4902]: I0121 15:54:54.879326 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:54 crc kubenswrapper[4902]: I0121 15:54:54.890128 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fszmf"] Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.047750 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-utilities\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.047816 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkkdc\" (UniqueName: \"kubernetes.io/projected/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-kube-api-access-gkkdc\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.047842 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-catalog-content\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.149413 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-utilities\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.149757 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkkdc\" (UniqueName: 
\"kubernetes.io/projected/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-kube-api-access-gkkdc\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.149845 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-utilities\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.149912 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-catalog-content\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.150228 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-catalog-content\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.170004 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkkdc\" (UniqueName: \"kubernetes.io/projected/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-kube-api-access-gkkdc\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.198558 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.729196 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fszmf"] Jan 21 15:54:56 crc kubenswrapper[4902]: I0121 15:54:56.243838 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fszmf" event={"ID":"1ad44bd6-85c0-4945-8d3d-e009a0abc10c","Type":"ContainerStarted","Data":"c61df1d1e5639819b900598787c6c9d4d0639ced8074247f8471086728aefad4"} Jan 21 15:54:57 crc kubenswrapper[4902]: I0121 15:54:57.252247 4902 generic.go:334] "Generic (PLEG): container finished" podID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerID="e9c3b082b60fb672921b1a5f09c1b6f91d4f0b1a8e2ddc94f470bae53c566dfb" exitCode=0 Jan 21 15:54:57 crc kubenswrapper[4902]: I0121 15:54:57.252310 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fszmf" event={"ID":"1ad44bd6-85c0-4945-8d3d-e009a0abc10c","Type":"ContainerDied","Data":"e9c3b082b60fb672921b1a5f09c1b6f91d4f0b1a8e2ddc94f470bae53c566dfb"} Jan 21 15:54:58 crc kubenswrapper[4902]: I0121 15:54:58.286879 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fszmf" event={"ID":"1ad44bd6-85c0-4945-8d3d-e009a0abc10c","Type":"ContainerStarted","Data":"edc76b00aca584cdf879dd18110a5799c944cc08737388cb80875856e9509584"} Jan 21 15:54:59 crc kubenswrapper[4902]: I0121 15:54:59.298838 4902 generic.go:334] "Generic (PLEG): container finished" podID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerID="edc76b00aca584cdf879dd18110a5799c944cc08737388cb80875856e9509584" exitCode=0 Jan 21 15:54:59 crc kubenswrapper[4902]: I0121 15:54:59.298888 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fszmf" event={"ID":"1ad44bd6-85c0-4945-8d3d-e009a0abc10c","Type":"ContainerDied","Data":"edc76b00aca584cdf879dd18110a5799c944cc08737388cb80875856e9509584"} Jan 21 15:55:00 crc kubenswrapper[4902]: I0121 15:55:00.308399 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fszmf" event={"ID":"1ad44bd6-85c0-4945-8d3d-e009a0abc10c","Type":"ContainerStarted","Data":"38081958732e9846e753a85b2af88f19986db078acee28d84ec86e2d80ef2d2e"} Jan 21 15:55:00 crc kubenswrapper[4902]: I0121 15:55:00.332508 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fszmf" podStartSLOduration=3.901762625 podStartE2EDuration="6.332492984s" podCreationTimestamp="2026-01-21 15:54:54 +0000 UTC" firstStartedPulling="2026-01-21 15:54:57.254066569 +0000 UTC m=+4859.330899608" lastFinishedPulling="2026-01-21 15:54:59.684796938 +0000 UTC m=+4861.761629967" observedRunningTime="2026-01-21 15:55:00.330006084 +0000 UTC m=+4862.406839113" watchObservedRunningTime="2026-01-21 15:55:00.332492984 +0000 UTC m=+4862.409326013" Jan 21 15:55:04 crc kubenswrapper[4902]: I0121 15:55:04.295396 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:55:04 crc kubenswrapper[4902]: E0121 15:55:04.296387 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:55:05 crc kubenswrapper[4902]: I0121 15:55:05.199601 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:55:05 crc kubenswrapper[4902]: I0121 15:55:05.200002 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:55:05 crc kubenswrapper[4902]: I0121 15:55:05.270748 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:55:05 crc kubenswrapper[4902]: I0121 15:55:05.390941 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:55:05 crc kubenswrapper[4902]: I0121 15:55:05.509235 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fszmf"] Jan 21 15:55:07 crc kubenswrapper[4902]: I0121 15:55:07.358452 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fszmf" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerName="registry-server" containerID="cri-o://38081958732e9846e753a85b2af88f19986db078acee28d84ec86e2d80ef2d2e" gracePeriod=2 Jan 21 15:55:08 crc kubenswrapper[4902]: I0121 15:55:08.367708 4902 generic.go:334] "Generic (PLEG): container finished" podID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerID="38081958732e9846e753a85b2af88f19986db078acee28d84ec86e2d80ef2d2e" exitCode=0 Jan 21 15:55:08 crc kubenswrapper[4902]: I0121 15:55:08.367759 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fszmf" event={"ID":"1ad44bd6-85c0-4945-8d3d-e009a0abc10c","Type":"ContainerDied","Data":"38081958732e9846e753a85b2af88f19986db078acee28d84ec86e2d80ef2d2e"} Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.301593 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.369144 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-catalog-content\") pod \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.369236 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-utilities\") pod \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.370294 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-utilities" (OuterVolumeSpecName: "utilities") pod "1ad44bd6-85c0-4945-8d3d-e009a0abc10c" (UID: "1ad44bd6-85c0-4945-8d3d-e009a0abc10c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.370620 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.377677 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fszmf" event={"ID":"1ad44bd6-85c0-4945-8d3d-e009a0abc10c","Type":"ContainerDied","Data":"c61df1d1e5639819b900598787c6c9d4d0639ced8074247f8471086728aefad4"} Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.377733 4902 scope.go:117] "RemoveContainer" containerID="38081958732e9846e753a85b2af88f19986db078acee28d84ec86e2d80ef2d2e" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.377780 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.397703 4902 scope.go:117] "RemoveContainer" containerID="edc76b00aca584cdf879dd18110a5799c944cc08737388cb80875856e9509584" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.415308 4902 scope.go:117] "RemoveContainer" containerID="e9c3b082b60fb672921b1a5f09c1b6f91d4f0b1a8e2ddc94f470bae53c566dfb" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.433253 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ad44bd6-85c0-4945-8d3d-e009a0abc10c" (UID: "1ad44bd6-85c0-4945-8d3d-e009a0abc10c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.471804 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkkdc\" (UniqueName: \"kubernetes.io/projected/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-kube-api-access-gkkdc\") pod \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.472163 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.477155 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-kube-api-access-gkkdc" (OuterVolumeSpecName: "kube-api-access-gkkdc") pod "1ad44bd6-85c0-4945-8d3d-e009a0abc10c" (UID: "1ad44bd6-85c0-4945-8d3d-e009a0abc10c"). InnerVolumeSpecName "kube-api-access-gkkdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.573427 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkkdc\" (UniqueName: \"kubernetes.io/projected/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-kube-api-access-gkkdc\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.715937 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fszmf"] Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.722743 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fszmf"] Jan 21 15:55:10 crc kubenswrapper[4902]: I0121 15:55:10.305655 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" path="/var/lib/kubelet/pods/1ad44bd6-85c0-4945-8d3d-e009a0abc10c/volumes" Jan 21 15:55:17 crc kubenswrapper[4902]: I0121 15:55:17.295823 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:55:17 crc kubenswrapper[4902]: E0121 15:55:17.296553 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:55:30 crc kubenswrapper[4902]: I0121 15:55:30.295838 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:55:31 crc kubenswrapper[4902]: I0121 15:55:31.525705 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"a22fa5f015dccd31057dbf3720918ed0aa27b09d4ac48d4d56f2468401c0c0fb"} Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.616999 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-js569"] Jan 21 15:56:09 crc kubenswrapper[4902]: E0121 15:56:09.617885 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerName="extract-utilities" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.617907 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerName="extract-utilities" Jan 21 15:56:09 crc kubenswrapper[4902]: E0121 15:56:09.617929 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerName="extract-content" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.617935 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerName="extract-content" Jan 21 15:56:09 crc kubenswrapper[4902]: E0121 15:56:09.617954 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerName="registry-server" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.617961 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerName="registry-server" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.618112 4902 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerName="registry-server" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.619080 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.623691 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-js569"] Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.782676 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-utilities\") pod \"redhat-operators-js569\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.782756 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd24d\" (UniqueName: \"kubernetes.io/projected/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-kube-api-access-hd24d\") pod \"redhat-operators-js569\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.782822 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-catalog-content\") pod \"redhat-operators-js569\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.883770 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-catalog-content\") pod \"redhat-operators-js569\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.883850 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-utilities\") pod \"redhat-operators-js569\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.883878 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd24d\" (UniqueName: \"kubernetes.io/projected/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-kube-api-access-hd24d\") pod \"redhat-operators-js569\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.884299 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-catalog-content\") pod \"redhat-operators-js569\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.884383 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-utilities\") pod \"redhat-operators-js569\" 
(UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.906574 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd24d\" (UniqueName: \"kubernetes.io/projected/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-kube-api-access-hd24d\") pod \"redhat-operators-js569\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.936862 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:10 crc kubenswrapper[4902]: I0121 15:56:10.410254 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-js569"] Jan 21 15:56:10 crc kubenswrapper[4902]: I0121 15:56:10.847760 4902 generic.go:334] "Generic (PLEG): container finished" podID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerID="45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46" exitCode=0 Jan 21 15:56:10 crc kubenswrapper[4902]: I0121 15:56:10.847856 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-js569" event={"ID":"a7e81ecf-2d0f-42ee-b056-8dcee4744f20","Type":"ContainerDied","Data":"45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46"} Jan 21 15:56:10 crc kubenswrapper[4902]: I0121 15:56:10.848085 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-js569" event={"ID":"a7e81ecf-2d0f-42ee-b056-8dcee4744f20","Type":"ContainerStarted","Data":"36eba7ca272d5745d020ecf93e963e4f76dcea615b0e4afd3a2fb792e8ede2ff"} Jan 21 15:56:11 crc kubenswrapper[4902]: I0121 15:56:11.858354 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-js569" event={"ID":"a7e81ecf-2d0f-42ee-b056-8dcee4744f20","Type":"ContainerStarted","Data":"c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473"} Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.041351 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-vw29z"] Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.042869 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.045195 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.045464 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.045780 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.050841 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-t2t4d"] Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.052277 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.053400 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-z96z6" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.054671 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.058486 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-vw29z"] Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.064737 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-t2t4d"] Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.218180 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.218252 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-config\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.218286 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45252\" (UniqueName: \"kubernetes.io/projected/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-kube-api-access-45252\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.218339 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbnzp\" (UniqueName: \"kubernetes.io/projected/30d00674-287c-403a-824a-b276b754f347-kube-api-access-wbnzp\") pod \"dnsmasq-dns-5986db9b4f-vw29z\" (UID: \"30d00674-287c-403a-824a-b276b754f347\") " pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.218359 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30d00674-287c-403a-824a-b276b754f347-config\") pod \"dnsmasq-dns-5986db9b4f-vw29z\" (UID: \"30d00674-287c-403a-824a-b276b754f347\") " pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.319689 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-config\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.320162 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45252\" (UniqueName: \"kubernetes.io/projected/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-kube-api-access-45252\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc 
kubenswrapper[4902]: I0121 15:56:12.320575 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbnzp\" (UniqueName: \"kubernetes.io/projected/30d00674-287c-403a-824a-b276b754f347-kube-api-access-wbnzp\") pod \"dnsmasq-dns-5986db9b4f-vw29z\" (UID: \"30d00674-287c-403a-824a-b276b754f347\") " pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.320664 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-config\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.320800 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30d00674-287c-403a-824a-b276b754f347-config\") pod \"dnsmasq-dns-5986db9b4f-vw29z\" (UID: \"30d00674-287c-403a-824a-b276b754f347\") " pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.320866 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.321543 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.321708 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30d00674-287c-403a-824a-b276b754f347-config\") pod \"dnsmasq-dns-5986db9b4f-vw29z\" (UID: \"30d00674-287c-403a-824a-b276b754f347\") " pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.353762 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45252\" (UniqueName: \"kubernetes.io/projected/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-kube-api-access-45252\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.353763 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbnzp\" (UniqueName: \"kubernetes.io/projected/30d00674-287c-403a-824a-b276b754f347-kube-api-access-wbnzp\") pod \"dnsmasq-dns-5986db9b4f-vw29z\" (UID: \"30d00674-287c-403a-824a-b276b754f347\") " pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.365195 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.382853 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.598302 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-t2t4d"] Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.659174 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-zhthq"] Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.660630 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.684453 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-zhthq"] Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.734529 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-config\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.734767 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-dns-svc\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.734806 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmkgm\" (UniqueName: \"kubernetes.io/projected/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-kube-api-access-mmkgm\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.836219 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-config\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.836310 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-dns-svc\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.836331 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmkgm\" (UniqueName: \"kubernetes.io/projected/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-kube-api-access-mmkgm\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.837523 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-dns-svc\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.837564 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-config\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.861024 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmkgm\" (UniqueName: \"kubernetes.io/projected/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-kube-api-access-mmkgm\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.867656 4902 generic.go:334] "Generic (PLEG): container finished" podID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerID="c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473" exitCode=0 Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.867715 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-js569" event={"ID":"a7e81ecf-2d0f-42ee-b056-8dcee4744f20","Type":"ContainerDied","Data":"c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473"} Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.910125 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-vw29z"] Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.995157 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.084744 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-t2t4d"] Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.088644 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-vw29z"] Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.148894 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-bqqlm"] Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.150473 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.201676 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-bqqlm"] Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.348194 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrmzb\" (UniqueName: \"kubernetes.io/projected/09f238e8-eb6e-47ac-818b-3558f9f6a841-kube-api-access-nrmzb\") pod \"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.348593 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-config\") pod \"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.348625 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.375500 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-zhthq"] Jan 21 15:56:13 crc kubenswrapper[4902]: W0121 15:56:13.414858 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2fd66a2_371b_44b8_bdd4_b6be36c4093f.slice/crio-d3e469d97419e7030a7fc682f2e583305b06427dd0e454f49cfc9cbeb69e90f4 WatchSource:0}: Error finding container d3e469d97419e7030a7fc682f2e583305b06427dd0e454f49cfc9cbeb69e90f4: Status 404 returned error can't find the container with id d3e469d97419e7030a7fc682f2e583305b06427dd0e454f49cfc9cbeb69e90f4 Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.449636 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrmzb\" (UniqueName: \"kubernetes.io/projected/09f238e8-eb6e-47ac-818b-3558f9f6a841-kube-api-access-nrmzb\") pod \"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.449723 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-config\") pod \"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.449761 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.450790 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-dns-svc\") pod 
\"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.451459 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-config\") pod \"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.472065 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrmzb\" (UniqueName: \"kubernetes.io/projected/09f238e8-eb6e-47ac-818b-3558f9f6a841-kube-api-access-nrmzb\") pod \"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.516603 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.786359 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-bqqlm"] Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.811197 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.833163 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.837271 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.837432 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.837550 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-928bn" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.837719 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.837886 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.838430 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.838828 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.843021 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.907754 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" event={"ID":"09f238e8-eb6e-47ac-818b-3558f9f6a841","Type":"ContainerStarted","Data":"cd7e0cd801ba79f538e3c63c7aa4f7926d46008854b1879da441818cd04cf0dc"} Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.911085 4902 generic.go:334] "Generic (PLEG): container finished" podID="30d00674-287c-403a-824a-b276b754f347" containerID="558fbcd6b35082dbaf76c770e098d73f89c3826407d9e535f25c3583444b1ed8" exitCode=0 Jan 21 15:56:13 crc 
kubenswrapper[4902]: I0121 15:56:13.912837 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" event={"ID":"30d00674-287c-403a-824a-b276b754f347","Type":"ContainerDied","Data":"558fbcd6b35082dbaf76c770e098d73f89c3826407d9e535f25c3583444b1ed8"} Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.912936 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" event={"ID":"30d00674-287c-403a-824a-b276b754f347","Type":"ContainerStarted","Data":"94abbe62094a7750b098b45bd14aa3af3bfdaf14f32c54556cd1707199bba1ca"} Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.921570 4902 generic.go:334] "Generic (PLEG): container finished" podID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" containerID="0eec98b5d0b0be8d198331d620aaf26c943f2f70750ff630c0d78b7c5a83456c" exitCode=0 Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.921665 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" event={"ID":"d2fd66a2-371b-44b8-bdd4-b6be36c4093f","Type":"ContainerDied","Data":"0eec98b5d0b0be8d198331d620aaf26c943f2f70750ff630c0d78b7c5a83456c"} Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.921702 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" event={"ID":"d2fd66a2-371b-44b8-bdd4-b6be36c4093f","Type":"ContainerStarted","Data":"d3e469d97419e7030a7fc682f2e583305b06427dd0e454f49cfc9cbeb69e90f4"} Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.942806 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-js569" event={"ID":"a7e81ecf-2d0f-42ee-b056-8dcee4744f20","Type":"ContainerStarted","Data":"8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049"} Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.957840 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.957937 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.957964 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.957995 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c6f17a65-e372-463d-b875-c8acdd3a8a04-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.958016 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ct2z\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-kube-api-access-7ct2z\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.958068 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.958097 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.958128 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.958155 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.958179 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.958197 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c6f17a65-e372-463d-b875-c8acdd3a8a04-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.963345 4902 generic.go:334] "Generic (PLEG): container finished" podID="8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5" containerID="e174e52c2764056538bbb95c67918069f9399591d9ba7544f3fb5d6d28846bd3" exitCode=0 Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.963511 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" event={"ID":"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5","Type":"ContainerDied","Data":"e174e52c2764056538bbb95c67918069f9399591d9ba7544f3fb5d6d28846bd3"} Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.963552 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" 
event={"ID":"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5","Type":"ContainerStarted","Data":"269383cbad3588e0a0a04dc6eb7ceb2301277087918a863ab8238086c4a80bab"} Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.987566 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-js569" podStartSLOduration=2.504600654 podStartE2EDuration="4.987542045s" podCreationTimestamp="2026-01-21 15:56:09 +0000 UTC" firstStartedPulling="2026-01-21 15:56:10.849679344 +0000 UTC m=+4932.926512373" lastFinishedPulling="2026-01-21 15:56:13.332620725 +0000 UTC m=+4935.409453764" observedRunningTime="2026-01-21 15:56:13.978398657 +0000 UTC m=+4936.055231686" watchObservedRunningTime="2026-01-21 15:56:13.987542045 +0000 UTC m=+4936.064375084" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059380 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c6f17a65-e372-463d-b875-c8acdd3a8a04-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059426 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ct2z\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-kube-api-access-7ct2z\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059478 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059513 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059564 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059604 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059631 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059651 
4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c6f17a65-e372-463d-b875-c8acdd3a8a04-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059688 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059773 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059802 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.064936 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c6f17a65-e372-463d-b875-c8acdd3a8a04-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.065268 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.065950 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.067994 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.074067 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.074665 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.075790 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.081647 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c6f17a65-e372-463d-b875-c8acdd3a8a04-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.082171 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.082216 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ca3246581f7b05cdf38cd2988972c40f4ce4dbd3e3f2637534a551fbe51cdea/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.086520 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.087289 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ct2z\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-kube-api-access-7ct2z\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.123877 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.154078 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: E0121 15:56:14.163814 4902 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 21 15:56:14 crc kubenswrapper[4902]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/d2fd66a2-371b-44b8-bdd4-b6be36c4093f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 21 15:56:14 crc kubenswrapper[4902]: > podSandboxID="d3e469d97419e7030a7fc682f2e583305b06427dd0e454f49cfc9cbeb69e90f4" Jan 21 15:56:14 crc kubenswrapper[4902]: E0121 15:56:14.163981 4902 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 15:56:14 crc kubenswrapper[4902]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb6hc5h68h68h594h659hdbh679h65ch5f6hdch6h5b9h8fh55hfhf8h57fhc7h56ch687h669h559h678h5dhc7hf7h697h5d6h9ch669h54fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmkgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-865d9b578f-zhthq_openstack(d2fd66a2-371b-44b8-bdd4-b6be36c4093f): CreateContainerError: container create failed: mount 
`/var/lib/kubelet/pods/d2fd66a2-371b-44b8-bdd4-b6be36c4093f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 21 15:56:14 crc kubenswrapper[4902]: > logger="UnhandledError" Jan 21 15:56:14 crc kubenswrapper[4902]: E0121 15:56:14.166176 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/d2fd66a2-371b-44b8-bdd4-b6be36c4093f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" podUID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.273253 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.274558 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.280417 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.280513 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.280624 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.280844 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.280949 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.281121 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.281230 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.281287 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ssxxh" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.287682 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.319641 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.366981 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45252\" (UniqueName: \"kubernetes.io/projected/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-kube-api-access-45252\") pod \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.367035 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-config\") pod \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.367109 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30d00674-287c-403a-824a-b276b754f347-config\") pod \"30d00674-287c-403a-824a-b276b754f347\" (UID: \"30d00674-287c-403a-824a-b276b754f347\") " Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.367141 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbnzp\" (UniqueName: \"kubernetes.io/projected/30d00674-287c-403a-824a-b276b754f347-kube-api-access-wbnzp\") pod \"30d00674-287c-403a-824a-b276b754f347\" (UID: \"30d00674-287c-403a-824a-b276b754f347\") " Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.367164 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-dns-svc\") pod \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.367975 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368019 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368039 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/53c0907a-0c62-4813-af74-b0f97c0e3c16-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368110 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " 
pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368127 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjczw\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-kube-api-access-xjczw\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368154 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-server-conf\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368193 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53c0907a-0c62-4813-af74-b0f97c0e3c16-pod-info\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368218 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368232 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368279 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368301 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-config-data\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.371952 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-kube-api-access-45252" (OuterVolumeSpecName: "kube-api-access-45252") pod "8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5" (UID: "8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5"). InnerVolumeSpecName "kube-api-access-45252". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.375607 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d00674-287c-403a-824a-b276b754f347-kube-api-access-wbnzp" (OuterVolumeSpecName: "kube-api-access-wbnzp") pod "30d00674-287c-403a-824a-b276b754f347" (UID: "30d00674-287c-403a-824a-b276b754f347"). InnerVolumeSpecName "kube-api-access-wbnzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.387059 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-config" (OuterVolumeSpecName: "config") pod "8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5" (UID: "8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.388130 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5" (UID: "8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.390398 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d00674-287c-403a-824a-b276b754f347-config" (OuterVolumeSpecName: "config") pod "30d00674-287c-403a-824a-b276b754f347" (UID: "30d00674-287c-403a-824a-b276b754f347"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469159 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469211 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjczw\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-kube-api-access-xjczw\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469253 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-server-conf\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469290 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53c0907a-0c62-4813-af74-b0f97c0e3c16-pod-info\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469318 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469338 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469382 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469406 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-config-data\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469434 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469531 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469558 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/53c0907a-0c62-4813-af74-b0f97c0e3c16-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469611 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30d00674-287c-403a-824a-b276b754f347-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469625 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbnzp\" (UniqueName: \"kubernetes.io/projected/30d00674-287c-403a-824a-b276b754f347-kube-api-access-wbnzp\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469637 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469649 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45252\" (UniqueName: \"kubernetes.io/projected/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-kube-api-access-45252\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469660 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.470063 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.470612 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-config-data\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.470791 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.472036 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-server-conf\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.472507 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.472540 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/044d17188a71d87a2f162043dfcb436253bd0043d87dd6a91403116fc167aa96/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.473354 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.473792 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53c0907a-0c62-4813-af74-b0f97c0e3c16-pod-info\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.473819 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.473983 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/53c0907a-0c62-4813-af74-b0f97c0e3c16-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.476016 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.486630 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjczw\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-kube-api-access-xjczw\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.499191 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.606116 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: W0121 15:56:14.636069 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6f17a65_e372_463d_b875_c8acdd3a8a04.slice/crio-9d7549ef3e343170f623b1703d13ef1cc7e5adec835d42203926b1f5605c69d7 WatchSource:0}: Error finding container 9d7549ef3e343170f623b1703d13ef1cc7e5adec835d42203926b1f5605c69d7: Status 404 returned error can't find the container with id 9d7549ef3e343170f623b1703d13ef1cc7e5adec835d42203926b1f5605c69d7 Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.638463 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.825941 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:56:14 crc kubenswrapper[4902]: E0121 15:56:14.826899 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5" containerName="init" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.826924 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5" containerName="init" Jan 21 15:56:14 crc kubenswrapper[4902]: E0121 15:56:14.826948 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d00674-287c-403a-824a-b276b754f347" containerName="init" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.826955 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d00674-287c-403a-824a-b276b754f347" containerName="init" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.827138 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d00674-287c-403a-824a-b276b754f347" containerName="init" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.827166 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5" containerName="init" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 
15:56:14.828252 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.836881 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.837327 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-zxvnh" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.837429 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.840595 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.843101 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.845791 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.972905 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.972897 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" event={"ID":"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5","Type":"ContainerDied","Data":"269383cbad3588e0a0a04dc6eb7ceb2301277087918a863ab8238086c4a80bab"} Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.973158 4902 scope.go:117] "RemoveContainer" containerID="e174e52c2764056538bbb95c67918069f9399591d9ba7544f3fb5d6d28846bd3" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.975189 4902 generic.go:334] "Generic (PLEG): container finished" podID="09f238e8-eb6e-47ac-818b-3558f9f6a841" containerID="d0d1ff36d9c251f2b2fbf7c284bbe148be1cf281267966cd9400c8ef5a5fdfad" exitCode=0 Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.975340 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" event={"ID":"09f238e8-eb6e-47ac-818b-3558f9f6a841","Type":"ContainerDied","Data":"d0d1ff36d9c251f2b2fbf7c284bbe148be1cf281267966cd9400c8ef5a5fdfad"} Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.977656 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz5cl\" (UniqueName: \"kubernetes.io/projected/a02660d2-21f1-4d0b-9351-efc03413d6f8-kube-api-access-hz5cl\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.977814 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-kolla-config\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.977993 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02660d2-21f1-4d0b-9351-efc03413d6f8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" 
Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.978121 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-config-data-default\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.978267 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3ddf73e8-c0ce-4e43-a7d5-8101c6d15663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ddf73e8-c0ce-4e43-a7d5-8101c6d15663\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.978555 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a02660d2-21f1-4d0b-9351-efc03413d6f8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.978605 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a02660d2-21f1-4d0b-9351-efc03413d6f8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.978659 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.979157 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" event={"ID":"30d00674-287c-403a-824a-b276b754f347","Type":"ContainerDied","Data":"94abbe62094a7750b098b45bd14aa3af3bfdaf14f32c54556cd1707199bba1ca"} Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.979848 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.003844 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c6f17a65-e372-463d-b875-c8acdd3a8a04","Type":"ContainerStarted","Data":"9d7549ef3e343170f623b1703d13ef1cc7e5adec835d42203926b1f5605c69d7"} Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.083012 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3ddf73e8-c0ce-4e43-a7d5-8101c6d15663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ddf73e8-c0ce-4e43-a7d5-8101c6d15663\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.093626 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.094721 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a02660d2-21f1-4d0b-9351-efc03413d6f8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.094773 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a02660d2-21f1-4d0b-9351-efc03413d6f8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.094825 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.094900 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz5cl\" (UniqueName: \"kubernetes.io/projected/a02660d2-21f1-4d0b-9351-efc03413d6f8-kube-api-access-hz5cl\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.094963 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-kolla-config\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.094992 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02660d2-21f1-4d0b-9351-efc03413d6f8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.095052 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-config-data-default\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " 
pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.096701 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-kolla-config\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.097889 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-config-data-default\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.098519 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a02660d2-21f1-4d0b-9351-efc03413d6f8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.101877 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a02660d2-21f1-4d0b-9351-efc03413d6f8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: W0121 15:56:15.107253 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53c0907a_0c62_4813_af74_b0f97c0e3c16.slice/crio-823499dcc6be68200313f9990f3b406f719dc54b8a2e736053275316c037d578 WatchSource:0}: Error finding container 823499dcc6be68200313f9990f3b406f719dc54b8a2e736053275316c037d578: Status 404 returned error can't find the container with id 823499dcc6be68200313f9990f3b406f719dc54b8a2e736053275316c037d578 Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.110999 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.112224 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02660d2-21f1-4d0b-9351-efc03413d6f8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.121404 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz5cl\" (UniqueName: \"kubernetes.io/projected/a02660d2-21f1-4d0b-9351-efc03413d6f8-kube-api-access-hz5cl\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.122327 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.122376 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3ddf73e8-c0ce-4e43-a7d5-8101c6d15663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ddf73e8-c0ce-4e43-a7d5-8101c6d15663\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a452236eb5d88b410c04f3c61b2f470566f27d1a0d65069000c34e834ad468d2/globalmount\"" pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.171427 4902 scope.go:117] "RemoveContainer" containerID="558fbcd6b35082dbaf76c770e098d73f89c3826407d9e535f25c3583444b1ed8" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.194164 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3ddf73e8-c0ce-4e43-a7d5-8101c6d15663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ddf73e8-c0ce-4e43-a7d5-8101c6d15663\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.202117 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-vw29z"] Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.207073 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-vw29z"] Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.240646 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-t2t4d"] Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.251762 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-t2t4d"] Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.452173 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.986566 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:56:16 crc kubenswrapper[4902]: W0121 15:56:15.997833 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda02660d2_21f1_4d0b_9351_efc03413d6f8.slice/crio-cb87312ee131e2f24b830e2bd7827493dbfb37c1a3852de93b33b3b7d6a43539 WatchSource:0}: Error finding container cb87312ee131e2f24b830e2bd7827493dbfb37c1a3852de93b33b3b7d6a43539: Status 404 returned error can't find the container with id cb87312ee131e2f24b830e2bd7827493dbfb37c1a3852de93b33b3b7d6a43539 Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.014958 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" event={"ID":"d2fd66a2-371b-44b8-bdd4-b6be36c4093f","Type":"ContainerStarted","Data":"69071f8a5cdc3de0675cee96512e97a433df1c9cd0588a392299c704d9d943f7"} Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.016132 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.028036 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" event={"ID":"09f238e8-eb6e-47ac-818b-3558f9f6a841","Type":"ContainerStarted","Data":"df1c8824638373bd513fe569a7cfc99ba5575ba306170d90a24a3e259265c66d"} Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.028672 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.030945 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a02660d2-21f1-4d0b-9351-efc03413d6f8","Type":"ContainerStarted","Data":"cb87312ee131e2f24b830e2bd7827493dbfb37c1a3852de93b33b3b7d6a43539"} Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.038579 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c6f17a65-e372-463d-b875-c8acdd3a8a04","Type":"ContainerStarted","Data":"56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28"} Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.041540 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" podStartSLOduration=4.041521089 podStartE2EDuration="4.041521089s" podCreationTimestamp="2026-01-21 15:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:16.033562955 +0000 UTC m=+4938.110395984" watchObservedRunningTime="2026-01-21 15:56:16.041521089 +0000 UTC m=+4938.118354118" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.042752 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"53c0907a-0c62-4813-af74-b0f97c0e3c16","Type":"ContainerStarted","Data":"823499dcc6be68200313f9990f3b406f719dc54b8a2e736053275316c037d578"} Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.060565 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" podStartSLOduration=3.060548256 podStartE2EDuration="3.060548256s" podCreationTimestamp="2026-01-21 15:56:13 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:16.053590529 +0000 UTC m=+4938.130423568" watchObservedRunningTime="2026-01-21 15:56:16.060548256 +0000 UTC m=+4938.137381285" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.283118 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.284608 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.287065 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.287271 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.287332 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.289535 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-m7kxs" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.306856 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d00674-287c-403a-824a-b276b754f347" path="/var/lib/kubelet/pods/30d00674-287c-403a-824a-b276b754f347/volumes" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.307889 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5" path="/var/lib/kubelet/pods/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5/volumes" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.308818 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.423380 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt2nt\" (UniqueName: \"kubernetes.io/projected/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-kube-api-access-gt2nt\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.423439 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.423466 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.423541 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.423725 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9a688da8-421e-4a07-8548-ad3f7b610235\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a688da8-421e-4a07-8548-ad3f7b610235\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.423785 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.423875 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.423913 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.525663 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9a688da8-421e-4a07-8548-ad3f7b610235\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a688da8-421e-4a07-8548-ad3f7b610235\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.525736 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.525770 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.525785 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.525815 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt2nt\" (UniqueName: 
\"kubernetes.io/projected/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-kube-api-access-gt2nt\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.525830 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.525845 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.525900 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.526827 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.527325 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.527486 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.528163 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.531569 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.531826 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9a688da8-421e-4a07-8548-ad3f7b610235\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a688da8-421e-4a07-8548-ad3f7b610235\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5f39cebcc9f41876819dcdd0155f6be58c57563341918546311a827df2908cfd/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.531886 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.536160 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.546108 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt2nt\" (UniqueName: \"kubernetes.io/projected/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-kube-api-access-gt2nt\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.558540 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9a688da8-421e-4a07-8548-ad3f7b610235\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a688da8-421e-4a07-8548-ad3f7b610235\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.600908 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.646934 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.647889 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.652012 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-9s5nr"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.652447 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.652640 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.665262 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.730833 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32eae2d9-5b57-4ae9-8451-fa00bd7be443-kolla-config\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.730912 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32eae2d9-5b57-4ae9-8451-fa00bd7be443-combined-ca-bundle\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.730960 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qwk7\" (UniqueName: \"kubernetes.io/projected/32eae2d9-5b57-4ae9-8451-fa00bd7be443-kube-api-access-9qwk7\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.731004 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32eae2d9-5b57-4ae9-8451-fa00bd7be443-config-data\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.731130 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/32eae2d9-5b57-4ae9-8451-fa00bd7be443-memcached-tls-certs\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.832631 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/32eae2d9-5b57-4ae9-8451-fa00bd7be443-memcached-tls-certs\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.832708 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32eae2d9-5b57-4ae9-8451-fa00bd7be443-kolla-config\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.832753 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32eae2d9-5b57-4ae9-8451-fa00bd7be443-combined-ca-bundle\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.832779 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qwk7\" (UniqueName: \"kubernetes.io/projected/32eae2d9-5b57-4ae9-8451-fa00bd7be443-kube-api-access-9qwk7\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.832826 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32eae2d9-5b57-4ae9-8451-fa00bd7be443-config-data\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.833784 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32eae2d9-5b57-4ae9-8451-fa00bd7be443-kolla-config\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.834146 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32eae2d9-5b57-4ae9-8451-fa00bd7be443-config-data\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.836754 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/32eae2d9-5b57-4ae9-8451-fa00bd7be443-memcached-tls-certs\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.837806 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32eae2d9-5b57-4ae9-8451-fa00bd7be443-combined-ca-bundle\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.858736 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qwk7\" (UniqueName: \"kubernetes.io/projected/32eae2d9-5b57-4ae9-8451-fa00bd7be443-kube-api-access-9qwk7\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0"
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.971788 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 21 15:56:17 crc kubenswrapper[4902]: I0121 15:56:17.062932 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 21 15:56:17 crc kubenswrapper[4902]: I0121 15:56:17.067783 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"53c0907a-0c62-4813-af74-b0f97c0e3c16","Type":"ContainerStarted","Data":"f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757"}
Jan 21 15:56:17 crc kubenswrapper[4902]: I0121 15:56:17.072527 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a02660d2-21f1-4d0b-9351-efc03413d6f8","Type":"ContainerStarted","Data":"d08735009117ed5e41317063f52447415b62dc5b644c6f8391c47548e16f143f"}
Jan 21 15:56:17 crc kubenswrapper[4902]: W0121 15:56:17.425379 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32eae2d9_5b57_4ae9_8451_fa00bd7be443.slice/crio-9afefa1e769a5d379636b4c6993d140a9c0b1f8c5ac584a16c0fee77b4bb4fcd WatchSource:0}: Error finding container 9afefa1e769a5d379636b4c6993d140a9c0b1f8c5ac584a16c0fee77b4bb4fcd: Status 404 returned error can't find the container with id 9afefa1e769a5d379636b4c6993d140a9c0b1f8c5ac584a16c0fee77b4bb4fcd
Jan 21 15:56:17 crc kubenswrapper[4902]: I0121 15:56:17.427396 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 21 15:56:18 crc kubenswrapper[4902]: I0121 15:56:18.089077 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a211ebd7-f82f-4cc7-91d3-77ec265a5d11","Type":"ContainerStarted","Data":"d5bb4603423cd8d93efab95c695e74107c0f4c4cb84aa804302e21ed56b1a624"}
Jan 21 15:56:18 crc kubenswrapper[4902]: I0121 15:56:18.089389 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a211ebd7-f82f-4cc7-91d3-77ec265a5d11","Type":"ContainerStarted","Data":"58ef484cb0df0811e97cf23f2a71589b8f87fe0790ccc5a55ea32683c71203a6"}
Jan 21 15:56:18 crc kubenswrapper[4902]: I0121 15:56:18.091221 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"32eae2d9-5b57-4ae9-8451-fa00bd7be443","Type":"ContainerStarted","Data":"9afefa1e769a5d379636b4c6993d140a9c0b1f8c5ac584a16c0fee77b4bb4fcd"}
Jan 21 15:56:19 crc kubenswrapper[4902]: I0121 15:56:19.101732 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"32eae2d9-5b57-4ae9-8451-fa00bd7be443","Type":"ContainerStarted","Data":"1b9363ebee1c365ac9cef072f722170cff59ebd8e56ca16a3fa2b4b46f37d173"}
Jan 21 15:56:19 crc kubenswrapper[4902]: I0121 15:56:19.101978 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 21 15:56:19 crc kubenswrapper[4902]: I0121 15:56:19.149175 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.149158228 podStartE2EDuration="3.149158228s" podCreationTimestamp="2026-01-21 15:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:19.124799042 +0000 UTC m=+4941.201632071" watchObservedRunningTime="2026-01-21 15:56:19.149158228 +0000 UTC m=+4941.225991257"
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:19 crc kubenswrapper[4902]: I0121 15:56:19.937731 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:19 crc kubenswrapper[4902]: I0121 15:56:19.974071 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:20 crc kubenswrapper[4902]: I0121 15:56:20.158810 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:20 crc kubenswrapper[4902]: I0121 15:56:20.208134 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-js569"] Jan 21 15:56:21 crc kubenswrapper[4902]: I0121 15:56:21.122384 4902 generic.go:334] "Generic (PLEG): container finished" podID="a02660d2-21f1-4d0b-9351-efc03413d6f8" containerID="d08735009117ed5e41317063f52447415b62dc5b644c6f8391c47548e16f143f" exitCode=0 Jan 21 15:56:21 crc kubenswrapper[4902]: I0121 15:56:21.122992 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a02660d2-21f1-4d0b-9351-efc03413d6f8","Type":"ContainerDied","Data":"d08735009117ed5e41317063f52447415b62dc5b644c6f8391c47548e16f143f"} Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.133819 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a02660d2-21f1-4d0b-9351-efc03413d6f8","Type":"ContainerStarted","Data":"f5ffef6a5b71b1522eea9137dcf815e4b0a7f5d6af3716783afce880f81f2ba4"} Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.134074 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-js569" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerName="registry-server" containerID="cri-o://8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049" gracePeriod=2 Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.167403 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.167381145 podStartE2EDuration="9.167381145s" podCreationTimestamp="2026-01-21 15:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:22.153838834 +0000 UTC m=+4944.230671853" watchObservedRunningTime="2026-01-21 15:56:22.167381145 +0000 UTC m=+4944.244214164" Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.586746 4902 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.586746 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-js569"
Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.729327 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-catalog-content\") pod \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") "
Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.729729 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd24d\" (UniqueName: \"kubernetes.io/projected/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-kube-api-access-hd24d\") pod \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") "
Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.729765 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-utilities\") pod \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") "
Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.730698 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-utilities" (OuterVolumeSpecName: "utilities") pod "a7e81ecf-2d0f-42ee-b056-8dcee4744f20" (UID: "a7e81ecf-2d0f-42ee-b056-8dcee4744f20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.842849 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd24d\" (UniqueName: \"kubernetes.io/projected/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-kube-api-access-hd24d\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.842904 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.996415 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.142597 4902 generic.go:334] "Generic (PLEG): container finished" podID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerID="8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049" exitCode=0 Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.142648 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-js569" event={"ID":"a7e81ecf-2d0f-42ee-b056-8dcee4744f20","Type":"ContainerDied","Data":"8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049"} Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.142684 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-js569" event={"ID":"a7e81ecf-2d0f-42ee-b056-8dcee4744f20","Type":"ContainerDied","Data":"36eba7ca272d5745d020ecf93e963e4f76dcea615b0e4afd3a2fb792e8ede2ff"} Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.142695 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.142707 4902 scope.go:117] "RemoveContainer" containerID="8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.158332 4902 scope.go:117] "RemoveContainer" containerID="c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.174716 4902 scope.go:117] "RemoveContainer" containerID="45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.200455 4902 scope.go:117] "RemoveContainer" containerID="8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049" Jan 21 15:56:23 crc kubenswrapper[4902]: E0121 15:56:23.200882 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049\": container with ID starting with 8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049 not found: ID does not exist" containerID="8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.200921 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049"} err="failed to get container status \"8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049\": rpc error: code = NotFound desc = could not find container \"8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049\": container with ID starting with 8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049 not found: ID does not exist" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.200972 4902 scope.go:117] "RemoveContainer" containerID="c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473" Jan 21 15:56:23 crc kubenswrapper[4902]: E0121 15:56:23.201373 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473\": container with ID starting with c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473 not found: ID does not exist" containerID="c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.201410 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473"} err="failed to get container status \"c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473\": rpc error: code = NotFound desc = could not find container \"c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473\": container with ID starting with c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473 not found: ID does not exist" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.201434 4902 scope.go:117] "RemoveContainer" containerID="45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46" Jan 21 15:56:23 crc kubenswrapper[4902]: E0121 15:56:23.201783 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46\": container with ID starting 
with 45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46 not found: ID does not exist" containerID="45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.201820 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46"} err="failed to get container status \"45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46\": rpc error: code = NotFound desc = could not find container \"45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46\": container with ID starting with 45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46 not found: ID does not exist" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.518967 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.580868 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-zhthq"] Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.582375 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" podUID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" containerName="dnsmasq-dns" containerID="cri-o://69071f8a5cdc3de0675cee96512e97a433df1c9cd0588a392299c704d9d943f7" gracePeriod=10 Jan 21 15:56:24 crc kubenswrapper[4902]: I0121 15:56:24.153754 4902 generic.go:334] "Generic (PLEG): container finished" podID="a211ebd7-f82f-4cc7-91d3-77ec265a5d11" containerID="d5bb4603423cd8d93efab95c695e74107c0f4c4cb84aa804302e21ed56b1a624" exitCode=0 Jan 21 15:56:24 crc kubenswrapper[4902]: I0121 15:56:24.153804 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a211ebd7-f82f-4cc7-91d3-77ec265a5d11","Type":"ContainerDied","Data":"d5bb4603423cd8d93efab95c695e74107c0f4c4cb84aa804302e21ed56b1a624"} Jan 21 15:56:24 crc kubenswrapper[4902]: I0121 15:56:24.993793 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7e81ecf-2d0f-42ee-b056-8dcee4744f20" (UID: "a7e81ecf-2d0f-42ee-b056-8dcee4744f20"). InnerVolumeSpecName "catalog-content". 
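[Note] The NotFound errors above are a benign race: the containers were already removed, so the follow-up ContainerStatus RPC fails with gRPC code NotFound and kubelet records the deletion as complete. A minimal sketch of that pattern against the CRI API, assuming the k8s.io/cri-api runtime v1 bindings (the package name cricleanup and function ensureRemoved are illustrative; this is not kubelet's actual code path):

package cricleanup

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// ensureRemoved treats a NotFound from ContainerStatus as "already gone"
// instead of as a failure, mirroring the log sequence above.
func ensureRemoved(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	_, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		return nil // container no longer exists; nothing to remove
	}
	if err != nil {
		return err
	}
	_, err = rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id})
	return err
}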
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.077581 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.163873 4902 generic.go:334] "Generic (PLEG): container finished" podID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" containerID="69071f8a5cdc3de0675cee96512e97a433df1c9cd0588a392299c704d9d943f7" exitCode=0 Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.163951 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" event={"ID":"d2fd66a2-371b-44b8-bdd4-b6be36c4093f","Type":"ContainerDied","Data":"69071f8a5cdc3de0675cee96512e97a433df1c9cd0588a392299c704d9d943f7"} Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.166118 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a211ebd7-f82f-4cc7-91d3-77ec265a5d11","Type":"ContainerStarted","Data":"2354a29fe6856435345d22b0bc640801d3d7413f19461a21bc3f276792b7ec26"} Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.188996 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.188976887 podStartE2EDuration="10.188976887s" podCreationTimestamp="2026-01-21 15:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:25.183727879 +0000 UTC m=+4947.260560908" watchObservedRunningTime="2026-01-21 15:56:25.188976887 +0000 UTC m=+4947.265809916" Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.269732 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-js569"] Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.292285 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-js569"] Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.453688 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.454222 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 21 15:56:25 crc kubenswrapper[4902]: E0121 15:56:25.753882 4902 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.21:58802->38.129.56.21:44701: write tcp 38.129.56.21:58802->38.129.56.21:44701: write: broken pipe Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.866485 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.990451 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-config\") pod \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.990641 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmkgm\" (UniqueName: \"kubernetes.io/projected/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-kube-api-access-mmkgm\") pod \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.990676 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-dns-svc\") pod \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.003254 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-kube-api-access-mmkgm" (OuterVolumeSpecName: "kube-api-access-mmkgm") pod "d2fd66a2-371b-44b8-bdd4-b6be36c4093f" (UID: "d2fd66a2-371b-44b8-bdd4-b6be36c4093f"). InnerVolumeSpecName "kube-api-access-mmkgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.023348 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-config" (OuterVolumeSpecName: "config") pod "d2fd66a2-371b-44b8-bdd4-b6be36c4093f" (UID: "d2fd66a2-371b-44b8-bdd4-b6be36c4093f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.032203 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2fd66a2-371b-44b8-bdd4-b6be36c4093f" (UID: "d2fd66a2-371b-44b8-bdd4-b6be36c4093f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.092737 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmkgm\" (UniqueName: \"kubernetes.io/projected/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-kube-api-access-mmkgm\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.092791 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.092804 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.175856 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" event={"ID":"d2fd66a2-371b-44b8-bdd4-b6be36c4093f","Type":"ContainerDied","Data":"d3e469d97419e7030a7fc682f2e583305b06427dd0e454f49cfc9cbeb69e90f4"} Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.175921 4902 scope.go:117] "RemoveContainer" containerID="69071f8a5cdc3de0675cee96512e97a433df1c9cd0588a392299c704d9d943f7" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.176257 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.204929 4902 scope.go:117] "RemoveContainer" containerID="0eec98b5d0b0be8d198331d620aaf26c943f2f70750ff630c0d78b7c5a83456c" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.209540 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-zhthq"] Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.215908 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-zhthq"] Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.303499 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" path="/var/lib/kubelet/pods/a7e81ecf-2d0f-42ee-b056-8dcee4744f20/volumes" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.304281 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" path="/var/lib/kubelet/pods/d2fd66a2-371b-44b8-bdd4-b6be36c4093f/volumes" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.602440 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.602789 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.974615 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 21 15:56:29 crc kubenswrapper[4902]: I0121 15:56:29.149262 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:29 crc kubenswrapper[4902]: I0121 15:56:29.244146 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:30 crc kubenswrapper[4902]: I0121 15:56:30.052574 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/openstack-galera-0" Jan 21 15:56:30 crc kubenswrapper[4902]: I0121 15:56:30.134540 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.752961 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-p5grh"] Jan 21 15:56:33 crc kubenswrapper[4902]: E0121 15:56:33.753480 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" containerName="dnsmasq-dns" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.753493 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" containerName="dnsmasq-dns" Jan 21 15:56:33 crc kubenswrapper[4902]: E0121 15:56:33.753501 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerName="registry-server" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.753507 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerName="registry-server" Jan 21 15:56:33 crc kubenswrapper[4902]: E0121 15:56:33.753524 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerName="extract-utilities" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.753530 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerName="extract-utilities" Jan 21 15:56:33 crc kubenswrapper[4902]: E0121 15:56:33.753539 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" containerName="init" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.753545 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" containerName="init" Jan 21 15:56:33 crc kubenswrapper[4902]: E0121 15:56:33.753560 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerName="extract-content" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.753566 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerName="extract-content" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.753688 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" containerName="dnsmasq-dns" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.753714 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerName="registry-server" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.754301 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.756780 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.764258 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p5grh"] Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.814686 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53228908-4e69-4bbf-a0ed-aaf5a64f5443-operator-scripts\") pod \"root-account-create-update-p5grh\" (UID: \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\") " pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.814984 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lb8p\" (UniqueName: \"kubernetes.io/projected/53228908-4e69-4bbf-a0ed-aaf5a64f5443-kube-api-access-6lb8p\") pod \"root-account-create-update-p5grh\" (UID: \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\") " pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.916018 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lb8p\" (UniqueName: \"kubernetes.io/projected/53228908-4e69-4bbf-a0ed-aaf5a64f5443-kube-api-access-6lb8p\") pod \"root-account-create-update-p5grh\" (UID: \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\") " pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.916471 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53228908-4e69-4bbf-a0ed-aaf5a64f5443-operator-scripts\") pod \"root-account-create-update-p5grh\" (UID: \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\") " pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.917526 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53228908-4e69-4bbf-a0ed-aaf5a64f5443-operator-scripts\") pod \"root-account-create-update-p5grh\" (UID: \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\") " pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.935809 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lb8p\" (UniqueName: \"kubernetes.io/projected/53228908-4e69-4bbf-a0ed-aaf5a64f5443-kube-api-access-6lb8p\") pod \"root-account-create-update-p5grh\" (UID: \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\") " pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:34 crc kubenswrapper[4902]: I0121 15:56:34.090074 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:34 crc kubenswrapper[4902]: I0121 15:56:34.516884 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p5grh"] Jan 21 15:56:35 crc kubenswrapper[4902]: I0121 15:56:35.244392 4902 generic.go:334] "Generic (PLEG): container finished" podID="53228908-4e69-4bbf-a0ed-aaf5a64f5443" containerID="22e6c4bda8a7a16db8551cc07c7e5779cb515519925f610dd7708c60c5c8a6fc" exitCode=0 Jan 21 15:56:35 crc kubenswrapper[4902]: I0121 15:56:35.244544 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p5grh" event={"ID":"53228908-4e69-4bbf-a0ed-aaf5a64f5443","Type":"ContainerDied","Data":"22e6c4bda8a7a16db8551cc07c7e5779cb515519925f610dd7708c60c5c8a6fc"} Jan 21 15:56:35 crc kubenswrapper[4902]: I0121 15:56:35.244696 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p5grh" event={"ID":"53228908-4e69-4bbf-a0ed-aaf5a64f5443","Type":"ContainerStarted","Data":"ed9272b6ec563938aa1aec284408bccdc33dd19498aacbe257717214ea1e4967"} Jan 21 15:56:36 crc kubenswrapper[4902]: I0121 15:56:36.569021 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:36 crc kubenswrapper[4902]: I0121 15:56:36.657894 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53228908-4e69-4bbf-a0ed-aaf5a64f5443-operator-scripts\") pod \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\" (UID: \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\") " Jan 21 15:56:36 crc kubenswrapper[4902]: I0121 15:56:36.658083 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lb8p\" (UniqueName: \"kubernetes.io/projected/53228908-4e69-4bbf-a0ed-aaf5a64f5443-kube-api-access-6lb8p\") pod \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\" (UID: \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\") " Jan 21 15:56:36 crc kubenswrapper[4902]: I0121 15:56:36.658608 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53228908-4e69-4bbf-a0ed-aaf5a64f5443-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53228908-4e69-4bbf-a0ed-aaf5a64f5443" (UID: "53228908-4e69-4bbf-a0ed-aaf5a64f5443"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:56:36 crc kubenswrapper[4902]: I0121 15:56:36.664150 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53228908-4e69-4bbf-a0ed-aaf5a64f5443-kube-api-access-6lb8p" (OuterVolumeSpecName: "kube-api-access-6lb8p") pod "53228908-4e69-4bbf-a0ed-aaf5a64f5443" (UID: "53228908-4e69-4bbf-a0ed-aaf5a64f5443"). InnerVolumeSpecName "kube-api-access-6lb8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:36 crc kubenswrapper[4902]: I0121 15:56:36.759453 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lb8p\" (UniqueName: \"kubernetes.io/projected/53228908-4e69-4bbf-a0ed-aaf5a64f5443-kube-api-access-6lb8p\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:36 crc kubenswrapper[4902]: I0121 15:56:36.759497 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53228908-4e69-4bbf-a0ed-aaf5a64f5443-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:37 crc kubenswrapper[4902]: I0121 15:56:37.267956 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p5grh" event={"ID":"53228908-4e69-4bbf-a0ed-aaf5a64f5443","Type":"ContainerDied","Data":"ed9272b6ec563938aa1aec284408bccdc33dd19498aacbe257717214ea1e4967"} Jan 21 15:56:37 crc kubenswrapper[4902]: I0121 15:56:37.268013 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed9272b6ec563938aa1aec284408bccdc33dd19498aacbe257717214ea1e4967" Jan 21 15:56:37 crc kubenswrapper[4902]: I0121 15:56:37.268012 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:40 crc kubenswrapper[4902]: I0121 15:56:40.203332 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-p5grh"] Jan 21 15:56:40 crc kubenswrapper[4902]: I0121 15:56:40.209029 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-p5grh"] Jan 21 15:56:40 crc kubenswrapper[4902]: I0121 15:56:40.304652 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53228908-4e69-4bbf-a0ed-aaf5a64f5443" path="/var/lib/kubelet/pods/53228908-4e69-4bbf-a0ed-aaf5a64f5443/volumes" Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.219949 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qc8ct"] Jan 21 15:56:45 crc kubenswrapper[4902]: E0121 15:56:45.220673 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53228908-4e69-4bbf-a0ed-aaf5a64f5443" containerName="mariadb-account-create-update" Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.220687 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="53228908-4e69-4bbf-a0ed-aaf5a64f5443" containerName="mariadb-account-create-update" Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.220815 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="53228908-4e69-4bbf-a0ed-aaf5a64f5443" containerName="mariadb-account-create-update" Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.221383 4902 util.go:30] "No sandbox for pod can be found. 
Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.221383 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qc8ct"
Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.223911 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.229478 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qc8ct"]
Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.286101 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df8b6\" (UniqueName: \"kubernetes.io/projected/d642b708-8313-4edd-8183-4dcd679721b6-kube-api-access-df8b6\") pod \"root-account-create-update-qc8ct\" (UID: \"d642b708-8313-4edd-8183-4dcd679721b6\") " pod="openstack/root-account-create-update-qc8ct"
Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.286310 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d642b708-8313-4edd-8183-4dcd679721b6-operator-scripts\") pod \"root-account-create-update-qc8ct\" (UID: \"d642b708-8313-4edd-8183-4dcd679721b6\") " pod="openstack/root-account-create-update-qc8ct"
Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.387838 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df8b6\" (UniqueName: \"kubernetes.io/projected/d642b708-8313-4edd-8183-4dcd679721b6-kube-api-access-df8b6\") pod \"root-account-create-update-qc8ct\" (UID: \"d642b708-8313-4edd-8183-4dcd679721b6\") " pod="openstack/root-account-create-update-qc8ct"
Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.387912 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d642b708-8313-4edd-8183-4dcd679721b6-operator-scripts\") pod \"root-account-create-update-qc8ct\" (UID: \"d642b708-8313-4edd-8183-4dcd679721b6\") " pod="openstack/root-account-create-update-qc8ct"
Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.389181 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d642b708-8313-4edd-8183-4dcd679721b6-operator-scripts\") pod \"root-account-create-update-qc8ct\" (UID: \"d642b708-8313-4edd-8183-4dcd679721b6\") " pod="openstack/root-account-create-update-qc8ct"
Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.412082 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df8b6\" (UniqueName: \"kubernetes.io/projected/d642b708-8313-4edd-8183-4dcd679721b6-kube-api-access-df8b6\") pod \"root-account-create-update-qc8ct\" (UID: \"d642b708-8313-4edd-8183-4dcd679721b6\") " pod="openstack/root-account-create-update-qc8ct"
Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.540332 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qc8ct"
Jan 21 15:56:46 crc kubenswrapper[4902]: I0121 15:56:45.966158 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qc8ct"]
Jan 21 15:56:46 crc kubenswrapper[4902]: W0121 15:56:45.985330 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd642b708_8313_4edd_8183_4dcd679721b6.slice/crio-e82c3b0b36a1d38d468ed2cce701ef20f23ba2ba7ea1e628023569acec0027f3 WatchSource:0}: Error finding container e82c3b0b36a1d38d468ed2cce701ef20f23ba2ba7ea1e628023569acec0027f3: Status 404 returned error can't find the container with id e82c3b0b36a1d38d468ed2cce701ef20f23ba2ba7ea1e628023569acec0027f3
Jan 21 15:56:46 crc kubenswrapper[4902]: I0121 15:56:46.334142 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qc8ct" event={"ID":"d642b708-8313-4edd-8183-4dcd679721b6","Type":"ContainerStarted","Data":"2f8dc76ea47c61aa0225c738e775c625c670e1dc7f5e344791fe2553026ed3d2"}
Jan 21 15:56:46 crc kubenswrapper[4902]: I0121 15:56:46.334489 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qc8ct" event={"ID":"d642b708-8313-4edd-8183-4dcd679721b6","Type":"ContainerStarted","Data":"e82c3b0b36a1d38d468ed2cce701ef20f23ba2ba7ea1e628023569acec0027f3"}
Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.342739 4902 generic.go:334] "Generic (PLEG): container finished" podID="d642b708-8313-4edd-8183-4dcd679721b6" containerID="2f8dc76ea47c61aa0225c738e775c625c670e1dc7f5e344791fe2553026ed3d2" exitCode=0
Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.342783 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qc8ct" event={"ID":"d642b708-8313-4edd-8183-4dcd679721b6","Type":"ContainerDied","Data":"2f8dc76ea47c61aa0225c738e775c625c670e1dc7f5e344791fe2553026ed3d2"}
Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.690725 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qc8ct"
Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.825298 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d642b708-8313-4edd-8183-4dcd679721b6-operator-scripts\") pod \"d642b708-8313-4edd-8183-4dcd679721b6\" (UID: \"d642b708-8313-4edd-8183-4dcd679721b6\") "
Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.826108 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d642b708-8313-4edd-8183-4dcd679721b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d642b708-8313-4edd-8183-4dcd679721b6" (UID: "d642b708-8313-4edd-8183-4dcd679721b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.826216 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df8b6\" (UniqueName: \"kubernetes.io/projected/d642b708-8313-4edd-8183-4dcd679721b6-kube-api-access-df8b6\") pod \"d642b708-8313-4edd-8183-4dcd679721b6\" (UID: \"d642b708-8313-4edd-8183-4dcd679721b6\") "
Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.826705 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d642b708-8313-4edd-8183-4dcd679721b6-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.831696 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d642b708-8313-4edd-8183-4dcd679721b6-kube-api-access-df8b6" (OuterVolumeSpecName: "kube-api-access-df8b6") pod "d642b708-8313-4edd-8183-4dcd679721b6" (UID: "d642b708-8313-4edd-8183-4dcd679721b6"). InnerVolumeSpecName "kube-api-access-df8b6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.928669 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df8b6\" (UniqueName: \"kubernetes.io/projected/d642b708-8313-4edd-8183-4dcd679721b6-kube-api-access-df8b6\") on node \"crc\" DevicePath \"\""
Jan 21 15:56:48 crc kubenswrapper[4902]: I0121 15:56:48.351383 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qc8ct" event={"ID":"d642b708-8313-4edd-8183-4dcd679721b6","Type":"ContainerDied","Data":"e82c3b0b36a1d38d468ed2cce701ef20f23ba2ba7ea1e628023569acec0027f3"}
Jan 21 15:56:48 crc kubenswrapper[4902]: I0121 15:56:48.352251 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e82c3b0b36a1d38d468ed2cce701ef20f23ba2ba7ea1e628023569acec0027f3"
Jan 21 15:56:48 crc kubenswrapper[4902]: I0121 15:56:48.352368 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qc8ct"
Jan 21 15:56:48 crc kubenswrapper[4902]: I0121 15:56:48.354497 4902 generic.go:334] "Generic (PLEG): container finished" podID="c6f17a65-e372-463d-b875-c8acdd3a8a04" containerID="56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28" exitCode=0
Jan 21 15:56:48 crc kubenswrapper[4902]: I0121 15:56:48.354545 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c6f17a65-e372-463d-b875-c8acdd3a8a04","Type":"ContainerDied","Data":"56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28"}
Jan 21 15:56:48 crc kubenswrapper[4902]: I0121 15:56:48.358154 4902 generic.go:334] "Generic (PLEG): container finished" podID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerID="f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757" exitCode=0
Jan 21 15:56:48 crc kubenswrapper[4902]: I0121 15:56:48.358218 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"53c0907a-0c62-4813-af74-b0f97c0e3c16","Type":"ContainerDied","Data":"f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757"}
Jan 21 15:56:49 crc kubenswrapper[4902]: I0121 15:56:49.368848 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c6f17a65-e372-463d-b875-c8acdd3a8a04","Type":"ContainerStarted","Data":"d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa"}
Jan 21 15:56:49 crc kubenswrapper[4902]: I0121 15:56:49.369649 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:56:49 crc kubenswrapper[4902]: I0121 15:56:49.371179 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"53c0907a-0c62-4813-af74-b0f97c0e3c16","Type":"ContainerStarted","Data":"0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa"}
Jan 21 15:56:49 crc kubenswrapper[4902]: I0121 15:56:49.371447 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 21 15:56:49 crc kubenswrapper[4902]: I0121 15:56:49.400504 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.400478326 podStartE2EDuration="37.400478326s" podCreationTimestamp="2026-01-21 15:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:49.393988773 +0000 UTC m=+4971.470821842" watchObservedRunningTime="2026-01-21 15:56:49.400478326 +0000 UTC m=+4971.477311365"
Jan 21 15:56:49 crc kubenswrapper[4902]: I0121 15:56:49.422030 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.422006333 podStartE2EDuration="36.422006333s" podCreationTimestamp="2026-01-21 15:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:49.416605941 +0000 UTC m=+4971.493438980" watchObservedRunningTime="2026-01-21 15:56:49.422006333 +0000 UTC m=+4971.498839362"
Jan 21 15:57:04 crc kubenswrapper[4902]: I0121 15:57:04.159726 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:04 crc kubenswrapper[4902]: I0121 15:57:04.609231 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 21 15:57:08 crc kubenswrapper[4902]: I0121 15:57:08.909147 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699964fbc-tv7h5"]
Jan 21 15:57:08 crc kubenswrapper[4902]: E0121 15:57:08.909854 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d642b708-8313-4edd-8183-4dcd679721b6" containerName="mariadb-account-create-update"
Jan 21 15:57:08 crc kubenswrapper[4902]: I0121 15:57:08.909869 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d642b708-8313-4edd-8183-4dcd679721b6" containerName="mariadb-account-create-update"
Jan 21 15:57:08 crc kubenswrapper[4902]: I0121 15:57:08.910028 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d642b708-8313-4edd-8183-4dcd679721b6" containerName="mariadb-account-create-update"
Jan 21 15:57:08 crc kubenswrapper[4902]: I0121 15:57:08.914304 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-tv7h5"
Jan 21 15:57:08 crc kubenswrapper[4902]: I0121 15:57:08.918897 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-tv7h5"]
Jan 21 15:57:08 crc kubenswrapper[4902]: I0121 15:57:08.944099 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-dns-svc\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5"
Jan 21 15:57:08 crc kubenswrapper[4902]: I0121 15:57:08.944171 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-config\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5"
Jan 21 15:57:08 crc kubenswrapper[4902]: I0121 15:57:08.944227 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fngb\" (UniqueName: \"kubernetes.io/projected/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-kube-api-access-5fngb\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5"
Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.045252 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-dns-svc\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5"
Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.045317 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-config\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5"
Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.045368 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fngb\" (UniqueName: \"kubernetes.io/projected/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-kube-api-access-5fngb\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5"
Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.046329 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-config\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5"
Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.046330 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-dns-svc\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5"
Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.069308 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fngb\" (UniqueName: \"kubernetes.io/projected/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-kube-api-access-5fngb\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5"
Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.235935 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-tv7h5"
Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.622817 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.695409 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-tv7h5"]
Jan 21 15:57:10 crc kubenswrapper[4902]: I0121 15:57:10.157878 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 21 15:57:10 crc kubenswrapper[4902]: I0121 15:57:10.534658 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" event={"ID":"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495","Type":"ContainerStarted","Data":"55e0afd2388c802fda6ed46b943d3d217c1e55bd357a95a943ea57a5cb135bcf"}
Jan 21 15:57:10 crc kubenswrapper[4902]: I0121 15:57:10.534699 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" event={"ID":"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495","Type":"ContainerStarted","Data":"44ae670f7eef2a4f69445e5a528bd2462006fde1e2ee9d0bfd1314e5b3fef469"}
Jan 21 15:57:11 crc kubenswrapper[4902]: I0121 15:57:11.545377 4902 generic.go:334] "Generic (PLEG): container finished" podID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" containerID="55e0afd2388c802fda6ed46b943d3d217c1e55bd357a95a943ea57a5cb135bcf" exitCode=0
Jan 21 15:57:11 crc kubenswrapper[4902]: I0121 15:57:11.545436 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" event={"ID":"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495","Type":"ContainerDied","Data":"55e0afd2388c802fda6ed46b943d3d217c1e55bd357a95a943ea57a5cb135bcf"}
Jan 21 15:57:12 crc kubenswrapper[4902]: I0121 15:57:12.555967 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" event={"ID":"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495","Type":"ContainerStarted","Data":"dcfdb86c7fc37ba60155fa847c386b16cdc514c878f35f9ebd3ae35f87d4d133"}
Jan 21 15:57:12 crc kubenswrapper[4902]: I0121 15:57:12.556176 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-699964fbc-tv7h5"
Jan 21 15:57:12 crc kubenswrapper[4902]: I0121
Jan 21 15:57:12 crc kubenswrapper[4902]: I0121 15:57:12.580001 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" podStartSLOduration=4.5799800059999995 podStartE2EDuration="4.579980006s" podCreationTimestamp="2026-01-21 15:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:57:12.579085381 +0000 UTC m=+4994.655918420" watchObservedRunningTime="2026-01-21 15:57:12.579980006 +0000 UTC m=+4994.656813045"
Jan 21 15:57:13 crc kubenswrapper[4902]: I0121 15:57:13.459486 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerName="rabbitmq" containerID="cri-o://0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa" gracePeriod=604797
Jan 21 15:57:14 crc kubenswrapper[4902]: I0121 15:57:14.475254 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="c6f17a65-e372-463d-b875-c8acdd3a8a04" containerName="rabbitmq" containerID="cri-o://d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa" gracePeriod=604796
Jan 21 15:57:14 crc kubenswrapper[4902]: I0121 15:57:14.607156 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.245:5671: connect: connection refused"
Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.237827 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-699964fbc-tv7h5"
Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.303090 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-bqqlm"]
Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.303437 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" podUID="09f238e8-eb6e-47ac-818b-3558f9f6a841" containerName="dnsmasq-dns" containerID="cri-o://df1c8824638373bd513fe569a7cfc99ba5575ba306170d90a24a3e259265c66d" gracePeriod=10
Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.616275 4902 generic.go:334] "Generic (PLEG): container finished" podID="09f238e8-eb6e-47ac-818b-3558f9f6a841" containerID="df1c8824638373bd513fe569a7cfc99ba5575ba306170d90a24a3e259265c66d" exitCode=0
Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.616378 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" event={"ID":"09f238e8-eb6e-47ac-818b-3558f9f6a841","Type":"ContainerDied","Data":"df1c8824638373bd513fe569a7cfc99ba5575ba306170d90a24a3e259265c66d"}
Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.737544 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm"
Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.814927 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-dns-svc\") pod \"09f238e8-eb6e-47ac-818b-3558f9f6a841\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") "
Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.815133 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-config\") pod \"09f238e8-eb6e-47ac-818b-3558f9f6a841\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") "
Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.815226 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrmzb\" (UniqueName: \"kubernetes.io/projected/09f238e8-eb6e-47ac-818b-3558f9f6a841-kube-api-access-nrmzb\") pod \"09f238e8-eb6e-47ac-818b-3558f9f6a841\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") "
Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.820338 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f238e8-eb6e-47ac-818b-3558f9f6a841-kube-api-access-nrmzb" (OuterVolumeSpecName: "kube-api-access-nrmzb") pod "09f238e8-eb6e-47ac-818b-3558f9f6a841" (UID: "09f238e8-eb6e-47ac-818b-3558f9f6a841"). InnerVolumeSpecName "kube-api-access-nrmzb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.849987 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09f238e8-eb6e-47ac-818b-3558f9f6a841" (UID: "09f238e8-eb6e-47ac-818b-3558f9f6a841"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.856006 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-config" (OuterVolumeSpecName: "config") pod "09f238e8-eb6e-47ac-818b-3558f9f6a841" (UID: "09f238e8-eb6e-47ac-818b-3558f9f6a841"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.916615 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.916650 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrmzb\" (UniqueName: \"kubernetes.io/projected/09f238e8-eb6e-47ac-818b-3558f9f6a841-kube-api-access-nrmzb\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.916660 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.005094 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018232 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-plugins\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") "
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018299 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-confd\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") "
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018345 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-config-data\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") "
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018409 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-tls\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") "
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018472 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-server-conf\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") "
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018503 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/53c0907a-0c62-4813-af74-b0f97c0e3c16-erlang-cookie-secret\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") "
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018534 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53c0907a-0c62-4813-af74-b0f97c0e3c16-pod-info\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") "
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018675 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") "
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018742 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-erlang-cookie\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") "
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018790 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjczw\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-kube-api-access-xjczw\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") "
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018823 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-plugins-conf\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") "
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018680 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.019766 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.020467 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.021564 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/53c0907a-0c62-4813-af74-b0f97c0e3c16-pod-info" (OuterVolumeSpecName: "pod-info") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.023945 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.027212 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53c0907a-0c62-4813-af74-b0f97c0e3c16-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.032322 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f" (OuterVolumeSpecName: "persistence") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.045776 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-config-data" (OuterVolumeSpecName: "config-data") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.081974 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-server-conf" (OuterVolumeSpecName: "server-conf") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120344 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120387 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120397 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120405 4902 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120413 4902 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/53c0907a-0c62-4813-af74-b0f97c0e3c16-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120423 4902 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53c0907a-0c62-4813-af74-b0f97c0e3c16-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120510 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") on node \"crc\" " Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120526 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120540 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjczw\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-kube-api-access-xjczw\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120548 4902 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.138546 4902 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.138724 4902 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f") on node "crc" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.145280 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.221924 4902 reconciler_common.go:293] "Volume detached for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.221972 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.630119 4902 generic.go:334] "Generic (PLEG): container finished" podID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerID="0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa" exitCode=0 Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.630272 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"53c0907a-0c62-4813-af74-b0f97c0e3c16","Type":"ContainerDied","Data":"0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa"} Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.630356 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"53c0907a-0c62-4813-af74-b0f97c0e3c16","Type":"ContainerDied","Data":"823499dcc6be68200313f9990f3b406f719dc54b8a2e736053275316c037d578"} Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.630275 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.630404 4902 scope.go:117] "RemoveContainer" containerID="0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.636340 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" event={"ID":"09f238e8-eb6e-47ac-818b-3558f9f6a841","Type":"ContainerDied","Data":"cd7e0cd801ba79f538e3c63c7aa4f7926d46008854b1879da441818cd04cf0dc"} Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.636447 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.662743 4902 scope.go:117] "RemoveContainer" containerID="f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.665841 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.672255 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.692667 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-bqqlm"] Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.703896 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-bqqlm"] Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.708189 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:57:20 crc kubenswrapper[4902]: E0121 15:57:20.708480 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f238e8-eb6e-47ac-818b-3558f9f6a841" containerName="dnsmasq-dns" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.708495 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f238e8-eb6e-47ac-818b-3558f9f6a841" containerName="dnsmasq-dns" Jan 21 15:57:20 crc kubenswrapper[4902]: E0121 15:57:20.708511 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f238e8-eb6e-47ac-818b-3558f9f6a841" containerName="init" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.708517 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f238e8-eb6e-47ac-818b-3558f9f6a841" containerName="init" Jan 21 15:57:20 crc kubenswrapper[4902]: E0121 15:57:20.708534 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerName="setup-container" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.708542 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerName="setup-container" Jan 21 15:57:20 crc kubenswrapper[4902]: E0121 15:57:20.708571 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerName="rabbitmq" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.708578 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerName="rabbitmq" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.727168 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerName="rabbitmq" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.727212 4902 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="09f238e8-eb6e-47ac-818b-3558f9f6a841" containerName="dnsmasq-dns" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.735629 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.735817 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.742431 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.742879 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ssxxh" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.742891 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.743670 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.743728 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.743753 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.744832 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.794555 4902 scope.go:117] "RemoveContainer" containerID="0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa" Jan 21 15:57:20 crc kubenswrapper[4902]: E0121 15:57:20.794918 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa\": container with ID starting with 0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa not found: ID does not exist" containerID="0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.794953 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa"} err="failed to get container status \"0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa\": rpc error: code = NotFound desc = could not find container \"0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa\": container with ID starting with 0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa not found: ID does not exist" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.794975 4902 scope.go:117] "RemoveContainer" containerID="f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757" Jan 21 15:57:20 crc kubenswrapper[4902]: E0121 15:57:20.795283 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757\": container with ID starting with f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757 not found: ID does not exist" containerID="f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.795305 4902 
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.795305 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757"} err="failed to get container status \"f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757\": rpc error: code = NotFound desc = could not find container \"f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757\": container with ID starting with f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757 not found: ID does not exist"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.795317 4902 scope.go:117] "RemoveContainer" containerID="df1c8824638373bd513fe569a7cfc99ba5575ba306170d90a24a3e259265c66d"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.819335 4902 scope.go:117] "RemoveContainer" containerID="d0d1ff36d9c251f2b2fbf7c284bbe148be1cf281267966cd9400c8ef5a5fdfad"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.832439 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-config-data\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.832735 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.832773 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.832794 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.832832 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.832910 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.832945 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.832978 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.833016 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.833073 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.833099 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhtgd\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-kube-api-access-zhtgd\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933505 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933546 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933578 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933605 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933626 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933657 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933699 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933760 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933785 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhtgd\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-kube-api-access-zhtgd\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933819 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-config-data\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933841 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.934969 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.935299 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.935821 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.936519 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.936916 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-config-data\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.938602 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.938632 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/044d17188a71d87a2f162043dfcb436253bd0043d87dd6a91403116fc167aa96/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.939562 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.940368 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.940691 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.941364 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.968758 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhtgd\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-kube-api-access-zhtgd\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.972442 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0"
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.123556 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136278 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-plugins\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136314 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-config-data\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136337 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ct2z\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-kube-api-access-7ct2z\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136358 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-confd\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136509 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-server-conf\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136542 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-tls\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136641 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c6f17a65-e372-463d-b875-c8acdd3a8a04-erlang-cookie-secret\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136742 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136788 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-plugins-conf\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136809 4902 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c6f17a65-e372-463d-b875-c8acdd3a8a04-pod-info\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136829 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-erlang-cookie\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.137074 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.137431 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.141769 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.142391 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.142799 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c6f17a65-e372-463d-b875-c8acdd3a8a04-pod-info" (OuterVolumeSpecName: "pod-info") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.150880 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-kube-api-access-7ct2z" (OuterVolumeSpecName: "kube-api-access-7ct2z") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "kube-api-access-7ct2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.164390 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6f17a65-e372-463d-b875-c8acdd3a8a04-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: E0121 15:57:21.164603 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560 podName:c6f17a65-e372-463d-b875-c8acdd3a8a04 nodeName:}" failed. No retries permitted until 2026-01-21 15:57:21.664581313 +0000 UTC m=+5003.741414342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.174440 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-config-data" (OuterVolumeSpecName: "config-data") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.197905 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-server-conf" (OuterVolumeSpecName: "server-conf") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238905 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238933 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238945 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ct2z\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-kube-api-access-7ct2z\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238955 4902 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238963 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238971 4902 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c6f17a65-e372-463d-b875-c8acdd3a8a04-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238979 4902 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238987 4902 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c6f17a65-e372-463d-b875-c8acdd3a8a04-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238994 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.248306 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.340413 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.510937 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.647455 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8","Type":"ContainerStarted","Data":"3417b2353d6f946f9428306ce572a32bbc9d237d4953d50947d70635e14f3289"} Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.650379 4902 generic.go:334] "Generic (PLEG): container finished" podID="c6f17a65-e372-463d-b875-c8acdd3a8a04" containerID="d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa" exitCode=0 Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.650422 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c6f17a65-e372-463d-b875-c8acdd3a8a04","Type":"ContainerDied","Data":"d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa"} Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.650465 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c6f17a65-e372-463d-b875-c8acdd3a8a04","Type":"ContainerDied","Data":"9d7549ef3e343170f623b1703d13ef1cc7e5adec835d42203926b1f5605c69d7"} Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.650466 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.650489 4902 scope.go:117] "RemoveContainer" containerID="d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.666235 4902 scope.go:117] "RemoveContainer" containerID="56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.681541 4902 scope.go:117] "RemoveContainer" containerID="d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa" Jan 21 15:57:21 crc kubenswrapper[4902]: E0121 15:57:21.681964 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa\": container with ID starting with d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa not found: ID does not exist" containerID="d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.681998 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa"} err="failed to get container status \"d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa\": rpc error: code = NotFound desc = could not find container \"d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa\": container with ID starting with d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa not found: ID does not exist" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.682023 4902 scope.go:117] "RemoveContainer" 
containerID="56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28" Jan 21 15:57:21 crc kubenswrapper[4902]: E0121 15:57:21.682388 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28\": container with ID starting with 56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28 not found: ID does not exist" containerID="56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.682432 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28"} err="failed to get container status \"56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28\": rpc error: code = NotFound desc = could not find container \"56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28\": container with ID starting with 56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28 not found: ID does not exist" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.746017 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.759119 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560" (OuterVolumeSpecName: "persistence") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.848353 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") on node \"crc\" " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.868215 4902 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.868375 4902 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560") on node "crc"
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.888872 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.913179 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.917395 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 21 15:57:21 crc kubenswrapper[4902]: E0121 15:57:21.917727 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f17a65-e372-463d-b875-c8acdd3a8a04" containerName="rabbitmq"
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.917743 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f17a65-e372-463d-b875-c8acdd3a8a04" containerName="rabbitmq"
Jan 21 15:57:21 crc kubenswrapper[4902]: E0121 15:57:21.917763 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f17a65-e372-463d-b875-c8acdd3a8a04" containerName="setup-container"
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.917769 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f17a65-e372-463d-b875-c8acdd3a8a04" containerName="setup-container"
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.917954 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6f17a65-e372-463d-b875-c8acdd3a8a04" containerName="rabbitmq"
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.921880 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.926785 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.927569 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.927648 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.928123 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.928325 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.928481 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-928bn"
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.928600 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.928814 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.949763 4902 reconciler_common.go:293] "Volume detached for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.051949 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052029 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052111 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052145 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052166 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzqw4\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-kube-api-access-lzqw4\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052409 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052534 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052598 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052637 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052694 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052745 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154299 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154370 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154405 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154428 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154460 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154515 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154538 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154578 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154596 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154629 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154649 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzqw4\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-kube-api-access-lzqw4\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.155497 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.156582 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.157579 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.157727 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.158329 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.159539 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.159615 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ca3246581f7b05cdf38cd2988972c40f4ce4dbd3e3f2637534a551fbe51cdea/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.160704 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.163146 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.163453 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.163289 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.178013 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzqw4\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-kube-api-access-lzqw4\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.195518 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.248289 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.307650 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f238e8-eb6e-47ac-818b-3558f9f6a841" path="/var/lib/kubelet/pods/09f238e8-eb6e-47ac-818b-3558f9f6a841/volumes"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.309354 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53c0907a-0c62-4813-af74-b0f97c0e3c16" path="/var/lib/kubelet/pods/53c0907a-0c62-4813-af74-b0f97c0e3c16/volumes"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.312020 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6f17a65-e372-463d-b875-c8acdd3a8a04" path="/var/lib/kubelet/pods/c6f17a65-e372-463d-b875-c8acdd3a8a04/volumes"
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.488559 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 21 15:57:22 crc kubenswrapper[4902]: W0121 15:57:22.492658 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0bcf8cd_3dd9_409b_84d9_693f7e471fc1.slice/crio-bce62c7a0fb48ec041cdccaf25ff949feab0524109c1047b0cc3bd9605f3b21a WatchSource:0}: Error finding container bce62c7a0fb48ec041cdccaf25ff949feab0524109c1047b0cc3bd9605f3b21a: Status 404 returned error can't find the container with id bce62c7a0fb48ec041cdccaf25ff949feab0524109c1047b0cc3bd9605f3b21a
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.658292 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1","Type":"ContainerStarted","Data":"bce62c7a0fb48ec041cdccaf25ff949feab0524109c1047b0cc3bd9605f3b21a"}
Jan 21 15:57:23 crc kubenswrapper[4902]: I0121 15:57:23.676562 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8","Type":"ContainerStarted","Data":"ed2ed34b6a745712f048be04ebe104f3fd28e32858c6ad02778421c757dbe1a2"}
Jan 21 15:57:23 crc kubenswrapper[4902]: I0121 15:57:23.821117 4902 scope.go:117] "RemoveContainer" containerID="832bdc2244fcacc08faf09f474999607b365e44c63c97c499a7f0ae90cc52a03"
Jan 21 15:57:23 crc kubenswrapper[4902]: I0121 15:57:23.840200 4902 scope.go:117] "RemoveContainer" containerID="3fd7269ed4af2b5ed8789200b615c63fc1a7f708f657559905419462e7af7de1"
Jan 21 15:57:23 crc kubenswrapper[4902]: I0121 15:57:23.880330 4902 scope.go:117] "RemoveContainer" containerID="01b1e3385b91a0ac713735c08ca6d5002c8c460c4cfa3d2e686ace79189fad0a"
Jan 21 15:57:24 crc kubenswrapper[4902]: I0121 15:57:24.687504 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1","Type":"ContainerStarted","Data":"192d4807b0e153ac1c718bb1c38de8050845e282eff52bf087f9f2ae6f85ee8f"}
Jan 21 15:57:47 crc kubenswrapper[4902]: I0121 15:57:47.770182 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:57:47 crc kubenswrapper[4902]: I0121 15:57:47.770659 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:57:55 crc kubenswrapper[4902]: I0121 15:57:55.929625 4902 generic.go:334] "Generic (PLEG): container finished" podID="e0bcf8cd-3dd9-409b-84d9-693f7e471fc1" containerID="192d4807b0e153ac1c718bb1c38de8050845e282eff52bf087f9f2ae6f85ee8f" exitCode=0
Jan 21 15:57:55 crc kubenswrapper[4902]: I0121 15:57:55.929723 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1","Type":"ContainerDied","Data":"192d4807b0e153ac1c718bb1c38de8050845e282eff52bf087f9f2ae6f85ee8f"}
Jan 21 15:57:55 crc kubenswrapper[4902]: I0121 15:57:55.933919 4902 generic.go:334] "Generic (PLEG): container finished" podID="7f24aaa5-50e0-4e80-ba28-3fa2b770fac8" containerID="ed2ed34b6a745712f048be04ebe104f3fd28e32858c6ad02778421c757dbe1a2" exitCode=0
Jan 21 15:57:55 crc kubenswrapper[4902]: I0121 15:57:55.933991 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8","Type":"ContainerDied","Data":"ed2ed34b6a745712f048be04ebe104f3fd28e32858c6ad02778421c757dbe1a2"}
Jan 21 15:57:56 crc kubenswrapper[4902]: I0121 15:57:56.942706 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8","Type":"ContainerStarted","Data":"d65a409ab929e9f371da19394ebc425f7079289f9b0fedcb69a0c3c57e8982b7"}
Jan 21 15:57:56 crc kubenswrapper[4902]: I0121 15:57:56.943276 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 21 15:57:56 crc kubenswrapper[4902]: I0121 15:57:56.945690 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1","Type":"ContainerStarted","Data":"6e3b57ec46142cef2394ad1fbbb883607474de0aaecec18122bbd60fbe9f25ce"}
Jan 21 15:57:56 crc kubenswrapper[4902]: I0121 15:57:56.945950 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:57:56 crc kubenswrapper[4902]: I0121 15:57:56.971746 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.971728667 podStartE2EDuration="36.971728667s" podCreationTimestamp="2026-01-21 15:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:57:56.965052488 +0000 UTC m=+5039.041885517" watchObservedRunningTime="2026-01-21 15:57:56.971728667 +0000 UTC m=+5039.048561696"
Jan 21 15:57:56 crc kubenswrapper[4902]: I0121 15:57:56.995100 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.995078317 podStartE2EDuration="35.995078317s" podCreationTimestamp="2026-01-21 15:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:57:56.986735381 +0000 UTC m=+5039.063568400" watchObservedRunningTime="2026-01-21 15:57:56.995078317 +0000 UTC m=+5039.071911346"
Jan 21 15:58:11 crc kubenswrapper[4902]: I0121 15:58:11.071213 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 21 15:58:12 crc kubenswrapper[4902]: I0121 15:58:12.251271 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.014713 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.016016 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.018511 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-8tnn5"
Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.025403 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.105617 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp6kk\" (UniqueName: \"kubernetes.io/projected/1f924640-2d46-4126-b047-2d3e65c3da76-kube-api-access-pp6kk\") pod \"mariadb-client\" (UID: \"1f924640-2d46-4126-b047-2d3e65c3da76\") " pod="openstack/mariadb-client"
Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.206651 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp6kk\" (UniqueName: \"kubernetes.io/projected/1f924640-2d46-4126-b047-2d3e65c3da76-kube-api-access-pp6kk\") pod \"mariadb-client\" (UID: \"1f924640-2d46-4126-b047-2d3e65c3da76\") " pod="openstack/mariadb-client"
Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.224745 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp6kk\" (UniqueName: \"kubernetes.io/projected/1f924640-2d46-4126-b047-2d3e65c3da76-kube-api-access-pp6kk\") pod \"mariadb-client\" (UID: \"1f924640-2d46-4126-b047-2d3e65c3da76\") " pod="openstack/mariadb-client"
Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.344551 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.837324 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.842635 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 15:58:17 crc kubenswrapper[4902]: I0121 15:58:17.097663 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1f924640-2d46-4126-b047-2d3e65c3da76","Type":"ContainerStarted","Data":"c90866c136c366d5f870ba48955662b0f66c8e0794cf907765a3b7080400e2fc"}
Jan 21 15:58:17 crc kubenswrapper[4902]: I0121 15:58:17.769708 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:58:17 crc kubenswrapper[4902]: I0121 15:58:17.770002 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:58:18 crc kubenswrapper[4902]: I0121 15:58:18.106619 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1f924640-2d46-4126-b047-2d3e65c3da76","Type":"ContainerStarted","Data":"ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad"}
Jan 21 15:58:18 crc kubenswrapper[4902]: I0121 15:58:18.118613 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=1.593623713 podStartE2EDuration="2.118592023s" podCreationTimestamp="2026-01-21 15:58:16 +0000 UTC" firstStartedPulling="2026-01-21 15:58:16.842443022 +0000 UTC m=+5058.919276051" lastFinishedPulling="2026-01-21 15:58:17.367411332 +0000 UTC m=+5059.444244361" observedRunningTime="2026-01-21 15:58:18.116652878 +0000 UTC m=+5060.193485907" watchObservedRunningTime="2026-01-21 15:58:18.118592023 +0000 UTC m=+5060.195425052"
Jan 21 15:58:30 crc kubenswrapper[4902]: I0121 15:58:30.472253 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 21 15:58:30 crc kubenswrapper[4902]: I0121 15:58:30.473121 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="1f924640-2d46-4126-b047-2d3e65c3da76" containerName="mariadb-client" containerID="cri-o://ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad" gracePeriod=30
Jan 21 15:58:30 crc kubenswrapper[4902]: I0121 15:58:30.962067 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.036858 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp6kk\" (UniqueName: \"kubernetes.io/projected/1f924640-2d46-4126-b047-2d3e65c3da76-kube-api-access-pp6kk\") pod \"1f924640-2d46-4126-b047-2d3e65c3da76\" (UID: \"1f924640-2d46-4126-b047-2d3e65c3da76\") "
Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.042520 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f924640-2d46-4126-b047-2d3e65c3da76-kube-api-access-pp6kk" (OuterVolumeSpecName: "kube-api-access-pp6kk") pod "1f924640-2d46-4126-b047-2d3e65c3da76" (UID: "1f924640-2d46-4126-b047-2d3e65c3da76"). InnerVolumeSpecName "kube-api-access-pp6kk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.138888 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp6kk\" (UniqueName: \"kubernetes.io/projected/1f924640-2d46-4126-b047-2d3e65c3da76-kube-api-access-pp6kk\") on node \"crc\" DevicePath \"\""
Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.216252 4902 generic.go:334] "Generic (PLEG): container finished" podID="1f924640-2d46-4126-b047-2d3e65c3da76" containerID="ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad" exitCode=143
Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.216305 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1f924640-2d46-4126-b047-2d3e65c3da76","Type":"ContainerDied","Data":"ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad"}
Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.216337 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1f924640-2d46-4126-b047-2d3e65c3da76","Type":"ContainerDied","Data":"c90866c136c366d5f870ba48955662b0f66c8e0794cf907765a3b7080400e2fc"}
Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.216347 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.216358 4902 scope.go:117] "RemoveContainer" containerID="ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad"
Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.250130 4902 scope.go:117] "RemoveContainer" containerID="ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad"
Jan 21 15:58:31 crc kubenswrapper[4902]: E0121 15:58:31.252240 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad\": container with ID starting with ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad not found: ID does not exist" containerID="ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad"
Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.308526 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad"} err="failed to get container status \"ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad\": rpc error: code = NotFound desc = could not find container \"ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad\": container with ID starting with ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad not found: ID does not exist"
Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.322289 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.327119 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 21 15:58:32 crc kubenswrapper[4902]: I0121 15:58:32.312166 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f924640-2d46-4126-b047-2d3e65c3da76" path="/var/lib/kubelet/pods/1f924640-2d46-4126-b047-2d3e65c3da76/volumes"
Jan 21 15:58:47 crc kubenswrapper[4902]: I0121 15:58:47.770678 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:58:47 crc kubenswrapper[4902]: I0121 15:58:47.771601 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:58:47 crc kubenswrapper[4902]: I0121 15:58:47.771713 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb"
Jan 21 15:58:47 crc kubenswrapper[4902]: I0121 15:58:47.772718 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a22fa5f015dccd31057dbf3720918ed0aa27b09d4ac48d4d56f2468401c0c0fb"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 15:58:47 crc kubenswrapper[4902]: I0121 15:58:47.772855 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://a22fa5f015dccd31057dbf3720918ed0aa27b09d4ac48d4d56f2468401c0c0fb" gracePeriod=600
Jan 21 15:58:48 crc kubenswrapper[4902]: I0121 15:58:48.365780 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="a22fa5f015dccd31057dbf3720918ed0aa27b09d4ac48d4d56f2468401c0c0fb" exitCode=0
Jan 21 15:58:48 crc kubenswrapper[4902]: I0121 15:58:48.365821 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"a22fa5f015dccd31057dbf3720918ed0aa27b09d4ac48d4d56f2468401c0c0fb"}
Jan 21 15:58:48 crc kubenswrapper[4902]: I0121 15:58:48.365851 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea"
Jan 21 15:58:49 crc kubenswrapper[4902]: I0121 15:58:49.380113 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e"}
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.146578 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"]
Jan 21 16:00:00 crc kubenswrapper[4902]: E0121 16:00:00.147518 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f924640-2d46-4126-b047-2d3e65c3da76" containerName="mariadb-client"
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.147534 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f924640-2d46-4126-b047-2d3e65c3da76" containerName="mariadb-client"
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.147733 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f924640-2d46-4126-b047-2d3e65c3da76" containerName="mariadb-client"
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.148479 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.151228 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.151367 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.158234 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"]
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.283036 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f705e9e-4608-4e35-9f28-665a52f2aba6-secret-volume\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.283389 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb7ls\" (UniqueName: \"kubernetes.io/projected/2f705e9e-4608-4e35-9f28-665a52f2aba6-kube-api-access-kb7ls\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.283593 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f705e9e-4608-4e35-9f28-665a52f2aba6-config-volume\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.385325 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb7ls\" (UniqueName: \"kubernetes.io/projected/2f705e9e-4608-4e35-9f28-665a52f2aba6-kube-api-access-kb7ls\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.385801 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f705e9e-4608-4e35-9f28-665a52f2aba6-config-volume\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.385902 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f705e9e-4608-4e35-9f28-665a52f2aba6-secret-volume\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.386807 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f705e9e-4608-4e35-9f28-665a52f2aba6-config-volume\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.392002 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f705e9e-4608-4e35-9f28-665a52f2aba6-secret-volume\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.407340 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb7ls\" (UniqueName: \"kubernetes.io/projected/2f705e9e-4608-4e35-9f28-665a52f2aba6-kube-api-access-kb7ls\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"
Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.488335 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"
Jan 21 16:00:01 crc kubenswrapper[4902]: I0121 16:00:01.022303 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"]
Jan 21 16:00:01 crc kubenswrapper[4902]: I0121 16:00:01.189750 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" event={"ID":"2f705e9e-4608-4e35-9f28-665a52f2aba6","Type":"ContainerStarted","Data":"7f12a7c2197d8c4c6016efffad3ddef70f0efd243fb60d5e18a6123284b9f8d8"}
Jan 21 16:00:02 crc kubenswrapper[4902]: I0121 16:00:02.199953 4902 generic.go:334] "Generic (PLEG): container finished" podID="2f705e9e-4608-4e35-9f28-665a52f2aba6" containerID="32fe8ff5a7cc5267205a3f1e8b759ee5d99a41ef6bca9732cd6d5478ff974b57" exitCode=0
Jan 21 16:00:02 crc kubenswrapper[4902]: I0121 16:00:02.200413 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" event={"ID":"2f705e9e-4608-4e35-9f28-665a52f2aba6","Type":"ContainerDied","Data":"32fe8ff5a7cc5267205a3f1e8b759ee5d99a41ef6bca9732cd6d5478ff974b57"}
Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.506040 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"
Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.635301 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f705e9e-4608-4e35-9f28-665a52f2aba6-secret-volume\") pod \"2f705e9e-4608-4e35-9f28-665a52f2aba6\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") "
Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.635418 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb7ls\" (UniqueName: \"kubernetes.io/projected/2f705e9e-4608-4e35-9f28-665a52f2aba6-kube-api-access-kb7ls\") pod \"2f705e9e-4608-4e35-9f28-665a52f2aba6\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") "
Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.635546 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f705e9e-4608-4e35-9f28-665a52f2aba6-config-volume\") pod \"2f705e9e-4608-4e35-9f28-665a52f2aba6\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") "
Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.636906 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f705e9e-4608-4e35-9f28-665a52f2aba6-config-volume" (OuterVolumeSpecName: "config-volume") pod "2f705e9e-4608-4e35-9f28-665a52f2aba6" (UID: "2f705e9e-4608-4e35-9f28-665a52f2aba6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.642558 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f705e9e-4608-4e35-9f28-665a52f2aba6-kube-api-access-kb7ls" (OuterVolumeSpecName: "kube-api-access-kb7ls") pod "2f705e9e-4608-4e35-9f28-665a52f2aba6" (UID: "2f705e9e-4608-4e35-9f28-665a52f2aba6"). InnerVolumeSpecName "kube-api-access-kb7ls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.644690 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f705e9e-4608-4e35-9f28-665a52f2aba6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2f705e9e-4608-4e35-9f28-665a52f2aba6" (UID: "2f705e9e-4608-4e35-9f28-665a52f2aba6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.737803 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f705e9e-4608-4e35-9f28-665a52f2aba6-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.737841 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb7ls\" (UniqueName: \"kubernetes.io/projected/2f705e9e-4608-4e35-9f28-665a52f2aba6-kube-api-access-kb7ls\") on node \"crc\" DevicePath \"\""
Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.737850 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f705e9e-4608-4e35-9f28-665a52f2aba6-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 16:00:04 crc kubenswrapper[4902]: I0121 16:00:04.214518 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" event={"ID":"2f705e9e-4608-4e35-9f28-665a52f2aba6","Type":"ContainerDied","Data":"7f12a7c2197d8c4c6016efffad3ddef70f0efd243fb60d5e18a6123284b9f8d8"}
Jan 21 16:00:04 crc kubenswrapper[4902]: I0121 16:00:04.214575 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f12a7c2197d8c4c6016efffad3ddef70f0efd243fb60d5e18a6123284b9f8d8"
Jan 21 16:00:04 crc kubenswrapper[4902]: I0121 16:00:04.214573 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"
Jan 21 16:00:04 crc kubenswrapper[4902]: I0121 16:00:04.578878 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d"]
Jan 21 16:00:04 crc kubenswrapper[4902]: I0121 16:00:04.584345 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d"]
Jan 21 16:00:06 crc kubenswrapper[4902]: I0121 16:00:06.304678 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebd9484-ab72-4bbd-84e7-99f28795ad85" path="/var/lib/kubelet/pods/bebd9484-ab72-4bbd-84e7-99f28795ad85/volumes"
Jan 21 16:00:24 crc kubenswrapper[4902]: I0121 16:00:24.069476 4902 scope.go:117] "RemoveContainer" containerID="8cefa707fcc5de9979cdbb8b42dd928ba6a77070fd6ce0a791939df6996a702e"
Jan 21 16:00:24 crc kubenswrapper[4902]: I0121 16:00:24.116829 4902 scope.go:117] "RemoveContainer" containerID="5f2cc1ae5d9e64887200b316f71af17b596d6725e436d2e46c7acd03a38f0c75"
Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.602392 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jsntb"]
Jan 21 16:00:33 crc kubenswrapper[4902]: E0121 16:00:33.603358 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f705e9e-4608-4e35-9f28-665a52f2aba6" containerName="collect-profiles"
Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.603375 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f705e9e-4608-4e35-9f28-665a52f2aba6" containerName="collect-profiles"
Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.603557 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f705e9e-4608-4e35-9f28-665a52f2aba6" containerName="collect-profiles"
Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.608205 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.622357 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jsntb"]
Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.696710 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-catalog-content\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.696810 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-utilities\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.696948 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvsg5\" (UniqueName: \"kubernetes.io/projected/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-kube-api-access-rvsg5\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.798198 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvsg5\" (UniqueName: \"kubernetes.io/projected/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-kube-api-access-rvsg5\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.798267 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-catalog-content\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.798295 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-utilities\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.798781 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-utilities\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.799271 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-catalog-content\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.818642 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvsg5\" (UniqueName: \"kubernetes.io/projected/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-kube-api-access-rvsg5\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.926785 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:34 crc kubenswrapper[4902]: I0121 16:00:34.400699 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jsntb"]
Jan 21 16:00:34 crc kubenswrapper[4902]: I0121 16:00:34.462294 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsntb" event={"ID":"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346","Type":"ContainerStarted","Data":"4297834efcd126cd86d3b32bb7784ade06bfc81298e047d8a8b69f151674240f"}
Jan 21 16:00:35 crc kubenswrapper[4902]: I0121 16:00:35.472934 4902 generic.go:334] "Generic (PLEG): container finished" podID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerID="5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9" exitCode=0
Jan 21 16:00:35 crc kubenswrapper[4902]: I0121 16:00:35.472986 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsntb" event={"ID":"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346","Type":"ContainerDied","Data":"5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9"}
Jan 21 16:00:36 crc kubenswrapper[4902]: I0121 16:00:36.481811 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsntb" event={"ID":"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346","Type":"ContainerStarted","Data":"2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6"}
Jan 21 16:00:37 crc kubenswrapper[4902]: I0121 16:00:37.492966 4902 generic.go:334] "Generic (PLEG): container finished" podID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerID="2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6" exitCode=0
Jan 21 16:00:37 crc kubenswrapper[4902]: I0121 16:00:37.493132 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsntb" event={"ID":"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346","Type":"ContainerDied","Data":"2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6"}
Jan 21 16:00:38 crc kubenswrapper[4902]: I0121 16:00:38.502570 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsntb" event={"ID":"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346","Type":"ContainerStarted","Data":"0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076"}
Jan 21 16:00:38 crc kubenswrapper[4902]: I0121 16:00:38.521328 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jsntb" podStartSLOduration=3.093458177 podStartE2EDuration="5.521310605s" podCreationTimestamp="2026-01-21 16:00:33 +0000 UTC" firstStartedPulling="2026-01-21 16:00:35.477563288 +0000 UTC m=+5197.554396357" lastFinishedPulling="2026-01-21 16:00:37.905415756 +0000 UTC m=+5199.982248785" observedRunningTime="2026-01-21 16:00:38.518369252 +0000 UTC m=+5200.595202281" watchObservedRunningTime="2026-01-21 16:00:38.521310605 +0000 UTC m=+5200.598143634"
Jan 21 16:00:43 crc kubenswrapper[4902]: I0121 16:00:43.928136 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:43 crc kubenswrapper[4902]: I0121 16:00:43.928796 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:43 crc kubenswrapper[4902]: I0121 16:00:43.971616 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:44 crc kubenswrapper[4902]: I0121 16:00:44.600705 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:44 crc kubenswrapper[4902]: I0121 16:00:44.661887 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jsntb"]
Jan 21 16:00:46 crc kubenswrapper[4902]: I0121 16:00:46.561977 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jsntb" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerName="registry-server" containerID="cri-o://0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076" gracePeriod=2
Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.444555 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.571522 4902 generic.go:334] "Generic (PLEG): container finished" podID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerID="0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076" exitCode=0
Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.571599 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsntb" event={"ID":"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346","Type":"ContainerDied","Data":"0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076"}
Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.571665 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsntb" event={"ID":"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346","Type":"ContainerDied","Data":"4297834efcd126cd86d3b32bb7784ade06bfc81298e047d8a8b69f151674240f"}
Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.571687 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsntb"
Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.571706 4902 scope.go:117] "RemoveContainer" containerID="0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076"
Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.598757 4902 scope.go:117] "RemoveContainer" containerID="2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6"
Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.615684 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-utilities\") pod \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") "
Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.615804 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-catalog-content\") pod \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") "
Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.615924 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvsg5\" (UniqueName: \"kubernetes.io/projected/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-kube-api-access-rvsg5\") pod \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") "
Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.617289 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-utilities" (OuterVolumeSpecName: "utilities") pod "b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" (UID: "b0e8dd27-ba7e-45bf-81ba-fdeb819b8346"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.624104 4902 scope.go:117] "RemoveContainer" containerID="5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9"
Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.624098 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-kube-api-access-rvsg5" (OuterVolumeSpecName: "kube-api-access-rvsg5") pod "b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" (UID: "b0e8dd27-ba7e-45bf-81ba-fdeb819b8346"). InnerVolumeSpecName "kube-api-access-rvsg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.665521 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" (UID: "b0e8dd27-ba7e-45bf-81ba-fdeb819b8346"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.671294 4902 scope.go:117] "RemoveContainer" containerID="0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076" Jan 21 16:00:47 crc kubenswrapper[4902]: E0121 16:00:47.671725 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076\": container with ID starting with 0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076 not found: ID does not exist" containerID="0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.671762 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076"} err="failed to get container status \"0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076\": rpc error: code = NotFound desc = could not find container \"0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076\": container with ID starting with 0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076 not found: ID does not exist" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.671799 4902 scope.go:117] "RemoveContainer" containerID="2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6" Jan 21 16:00:47 crc kubenswrapper[4902]: E0121 16:00:47.672151 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6\": container with ID starting with 2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6 not found: ID does not exist" containerID="2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.672180 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6"} err="failed to get container status \"2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6\": rpc error: code = NotFound desc = could not find container \"2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6\": container with ID starting with 2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6 not found: ID does not exist" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.672194 4902 scope.go:117] "RemoveContainer" containerID="5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9" Jan 21 16:00:47 crc kubenswrapper[4902]: E0121 16:00:47.672695 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9\": container with ID starting with 5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9 not found: ID does not exist" containerID="5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.672757 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9"} err="failed to get container status \"5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9\": rpc error: code = NotFound desc = could not 
find container \"5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9\": container with ID starting with 5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9 not found: ID does not exist" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.718014 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvsg5\" (UniqueName: \"kubernetes.io/projected/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-kube-api-access-rvsg5\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.718038 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.718070 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.919911 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jsntb"] Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.926002 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jsntb"] Jan 21 16:00:48 crc kubenswrapper[4902]: I0121 16:00:48.303974 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" path="/var/lib/kubelet/pods/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346/volumes" Jan 21 16:01:17 crc kubenswrapper[4902]: I0121 16:01:17.770096 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:01:17 crc kubenswrapper[4902]: I0121 16:01:17.770718 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:01:47 crc kubenswrapper[4902]: I0121 16:01:47.770201 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:01:47 crc kubenswrapper[4902]: I0121 16:01:47.770770 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:02:17 crc kubenswrapper[4902]: I0121 16:02:17.770983 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:02:17 crc kubenswrapper[4902]: I0121 16:02:17.771541 4902 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:02:17 crc kubenswrapper[4902]: I0121 16:02:17.771607 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 16:02:17 crc kubenswrapper[4902]: I0121 16:02:17.772371 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:02:17 crc kubenswrapper[4902]: I0121 16:02:17.772665 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" gracePeriod=600 Jan 21 16:02:17 crc kubenswrapper[4902]: E0121 16:02:17.895376 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:02:18 crc kubenswrapper[4902]: I0121 16:02:18.263994 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" exitCode=0 Jan 21 16:02:18 crc kubenswrapper[4902]: I0121 16:02:18.264038 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e"} Jan 21 16:02:18 crc kubenswrapper[4902]: I0121 16:02:18.264117 4902 scope.go:117] "RemoveContainer" containerID="a22fa5f015dccd31057dbf3720918ed0aa27b09d4ac48d4d56f2468401c0c0fb" Jan 21 16:02:18 crc kubenswrapper[4902]: I0121 16:02:18.264662 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:02:18 crc kubenswrapper[4902]: E0121 16:02:18.264947 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.752243 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 21 16:02:21 crc kubenswrapper[4902]: E0121 16:02:21.753201 4902 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerName="extract-content" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.753217 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerName="extract-content" Jan 21 16:02:21 crc kubenswrapper[4902]: E0121 16:02:21.753237 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerName="extract-utilities" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.753246 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerName="extract-utilities" Jan 21 16:02:21 crc kubenswrapper[4902]: E0121 16:02:21.753260 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerName="registry-server" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.753266 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerName="registry-server" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.753453 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerName="registry-server" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.753984 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.757566 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.809807 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-8tnn5" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.897450 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8c715e82-54b4-49f8-8bc2-33391c59a801\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c715e82-54b4-49f8-8bc2-33391c59a801\") pod \"mariadb-copy-data\" (UID: \"45f02625-70e9-48ec-8dd4-a0bd456a283b\") " pod="openstack/mariadb-copy-data" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.897508 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d2nc\" (UniqueName: \"kubernetes.io/projected/45f02625-70e9-48ec-8dd4-a0bd456a283b-kube-api-access-4d2nc\") pod \"mariadb-copy-data\" (UID: \"45f02625-70e9-48ec-8dd4-a0bd456a283b\") " pod="openstack/mariadb-copy-data" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.999419 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8c715e82-54b4-49f8-8bc2-33391c59a801\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c715e82-54b4-49f8-8bc2-33391c59a801\") pod \"mariadb-copy-data\" (UID: \"45f02625-70e9-48ec-8dd4-a0bd456a283b\") " pod="openstack/mariadb-copy-data" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.999482 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d2nc\" (UniqueName: \"kubernetes.io/projected/45f02625-70e9-48ec-8dd4-a0bd456a283b-kube-api-access-4d2nc\") pod \"mariadb-copy-data\" (UID: \"45f02625-70e9-48ec-8dd4-a0bd456a283b\") " pod="openstack/mariadb-copy-data" Jan 21 16:02:22 crc kubenswrapper[4902]: I0121 16:02:22.003203 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability 
not set. Skipping MountDevice... Jan 21 16:02:22 crc kubenswrapper[4902]: I0121 16:02:22.003255 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8c715e82-54b4-49f8-8bc2-33391c59a801\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c715e82-54b4-49f8-8bc2-33391c59a801\") pod \"mariadb-copy-data\" (UID: \"45f02625-70e9-48ec-8dd4-a0bd456a283b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7a0173fef46fde57e42c640e0fbcdc871aa92738e93088ab89d2b9968325093c/globalmount\"" pod="openstack/mariadb-copy-data" Jan 21 16:02:22 crc kubenswrapper[4902]: I0121 16:02:22.023865 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d2nc\" (UniqueName: \"kubernetes.io/projected/45f02625-70e9-48ec-8dd4-a0bd456a283b-kube-api-access-4d2nc\") pod \"mariadb-copy-data\" (UID: \"45f02625-70e9-48ec-8dd4-a0bd456a283b\") " pod="openstack/mariadb-copy-data" Jan 21 16:02:22 crc kubenswrapper[4902]: I0121 16:02:22.028980 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8c715e82-54b4-49f8-8bc2-33391c59a801\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c715e82-54b4-49f8-8bc2-33391c59a801\") pod \"mariadb-copy-data\" (UID: \"45f02625-70e9-48ec-8dd4-a0bd456a283b\") " pod="openstack/mariadb-copy-data" Jan 21 16:02:22 crc kubenswrapper[4902]: I0121 16:02:22.119579 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 21 16:02:22 crc kubenswrapper[4902]: I0121 16:02:22.609998 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 21 16:02:23 crc kubenswrapper[4902]: I0121 16:02:23.315778 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"45f02625-70e9-48ec-8dd4-a0bd456a283b","Type":"ContainerStarted","Data":"2f267b1e9fd95ab1d681b1fc71dd13c99d7242664f4622ef5a35f6cfcd7f0f68"} Jan 21 16:02:23 crc kubenswrapper[4902]: I0121 16:02:23.316148 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"45f02625-70e9-48ec-8dd4-a0bd456a283b","Type":"ContainerStarted","Data":"f03372433b6fe52b931300c0e8c5e006363900d659194ca0c879164110259772"} Jan 21 16:02:23 crc kubenswrapper[4902]: I0121 16:02:23.331834 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.331756924 podStartE2EDuration="3.331756924s" podCreationTimestamp="2026-01-21 16:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:02:23.330318704 +0000 UTC m=+5305.407151773" watchObservedRunningTime="2026-01-21 16:02:23.331756924 +0000 UTC m=+5305.408590013" Jan 21 16:02:26 crc kubenswrapper[4902]: I0121 16:02:26.140728 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:26 crc kubenswrapper[4902]: I0121 16:02:26.142556 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:02:26 crc kubenswrapper[4902]: I0121 16:02:26.148115 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:26 crc kubenswrapper[4902]: I0121 16:02:26.271196 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbptg\" (UniqueName: \"kubernetes.io/projected/770cc96e-3108-4294-aa08-d84995b87c15-kube-api-access-kbptg\") pod \"mariadb-client\" (UID: \"770cc96e-3108-4294-aa08-d84995b87c15\") " pod="openstack/mariadb-client" Jan 21 16:02:26 crc kubenswrapper[4902]: I0121 16:02:26.372352 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbptg\" (UniqueName: \"kubernetes.io/projected/770cc96e-3108-4294-aa08-d84995b87c15-kube-api-access-kbptg\") pod \"mariadb-client\" (UID: \"770cc96e-3108-4294-aa08-d84995b87c15\") " pod="openstack/mariadb-client" Jan 21 16:02:26 crc kubenswrapper[4902]: I0121 16:02:26.397368 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbptg\" (UniqueName: \"kubernetes.io/projected/770cc96e-3108-4294-aa08-d84995b87c15-kube-api-access-kbptg\") pod \"mariadb-client\" (UID: \"770cc96e-3108-4294-aa08-d84995b87c15\") " pod="openstack/mariadb-client" Jan 21 16:02:26 crc kubenswrapper[4902]: I0121 16:02:26.466337 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:02:26 crc kubenswrapper[4902]: I0121 16:02:26.948298 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:26 crc kubenswrapper[4902]: W0121 16:02:26.953346 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod770cc96e_3108_4294_aa08_d84995b87c15.slice/crio-b94e795b4c024b501fc55783ab36f8a15c3cbd039e88411bcd7c1ca7d2f943df WatchSource:0}: Error finding container b94e795b4c024b501fc55783ab36f8a15c3cbd039e88411bcd7c1ca7d2f943df: Status 404 returned error can't find the container with id b94e795b4c024b501fc55783ab36f8a15c3cbd039e88411bcd7c1ca7d2f943df Jan 21 16:02:27 crc kubenswrapper[4902]: I0121 16:02:27.359875 4902 generic.go:334] "Generic (PLEG): container finished" podID="770cc96e-3108-4294-aa08-d84995b87c15" containerID="0b56fe28c730faebb9b858e50e97ecef1625af2c756c8684ae0d499694f95667" exitCode=0 Jan 21 16:02:27 crc kubenswrapper[4902]: I0121 16:02:27.359917 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"770cc96e-3108-4294-aa08-d84995b87c15","Type":"ContainerDied","Data":"0b56fe28c730faebb9b858e50e97ecef1625af2c756c8684ae0d499694f95667"} Jan 21 16:02:27 crc kubenswrapper[4902]: I0121 16:02:27.359942 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"770cc96e-3108-4294-aa08-d84995b87c15","Type":"ContainerStarted","Data":"b94e795b4c024b501fc55783ab36f8a15c3cbd039e88411bcd7c1ca7d2f943df"} Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.690498 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.720171 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_770cc96e-3108-4294-aa08-d84995b87c15/mariadb-client/0.log" Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.746991 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.752817 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.847103 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbptg\" (UniqueName: \"kubernetes.io/projected/770cc96e-3108-4294-aa08-d84995b87c15-kube-api-access-kbptg\") pod \"770cc96e-3108-4294-aa08-d84995b87c15\" (UID: \"770cc96e-3108-4294-aa08-d84995b87c15\") " Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.852346 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/770cc96e-3108-4294-aa08-d84995b87c15-kube-api-access-kbptg" (OuterVolumeSpecName: "kube-api-access-kbptg") pod "770cc96e-3108-4294-aa08-d84995b87c15" (UID: "770cc96e-3108-4294-aa08-d84995b87c15"). InnerVolumeSpecName "kube-api-access-kbptg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.879777 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:28 crc kubenswrapper[4902]: E0121 16:02:28.880183 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770cc96e-3108-4294-aa08-d84995b87c15" containerName="mariadb-client" Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.880196 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="770cc96e-3108-4294-aa08-d84995b87c15" containerName="mariadb-client" Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.880343 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="770cc96e-3108-4294-aa08-d84995b87c15" containerName="mariadb-client" Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.880880 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.896872 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.948661 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbptg\" (UniqueName: \"kubernetes.io/projected/770cc96e-3108-4294-aa08-d84995b87c15-kube-api-access-kbptg\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:29 crc kubenswrapper[4902]: I0121 16:02:29.049706 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blmx2\" (UniqueName: \"kubernetes.io/projected/7abbaa6f-fb64-458a-bf9c-1fd63370b978-kube-api-access-blmx2\") pod \"mariadb-client\" (UID: \"7abbaa6f-fb64-458a-bf9c-1fd63370b978\") " pod="openstack/mariadb-client" Jan 21 16:02:29 crc kubenswrapper[4902]: I0121 16:02:29.151673 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blmx2\" (UniqueName: \"kubernetes.io/projected/7abbaa6f-fb64-458a-bf9c-1fd63370b978-kube-api-access-blmx2\") pod \"mariadb-client\" (UID: \"7abbaa6f-fb64-458a-bf9c-1fd63370b978\") " pod="openstack/mariadb-client" Jan 21 16:02:29 crc kubenswrapper[4902]: I0121 16:02:29.169307 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blmx2\" (UniqueName: \"kubernetes.io/projected/7abbaa6f-fb64-458a-bf9c-1fd63370b978-kube-api-access-blmx2\") pod \"mariadb-client\" (UID: \"7abbaa6f-fb64-458a-bf9c-1fd63370b978\") " pod="openstack/mariadb-client" Jan 21 16:02:29 crc kubenswrapper[4902]: I0121 16:02:29.212448 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:02:29 crc kubenswrapper[4902]: I0121 16:02:29.379687 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b94e795b4c024b501fc55783ab36f8a15c3cbd039e88411bcd7c1ca7d2f943df" Jan 21 16:02:29 crc kubenswrapper[4902]: I0121 16:02:29.380031 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:02:29 crc kubenswrapper[4902]: I0121 16:02:29.404829 4902 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="770cc96e-3108-4294-aa08-d84995b87c15" podUID="7abbaa6f-fb64-458a-bf9c-1fd63370b978" Jan 21 16:02:29 crc kubenswrapper[4902]: I0121 16:02:29.635074 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:30 crc kubenswrapper[4902]: I0121 16:02:30.304366 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="770cc96e-3108-4294-aa08-d84995b87c15" path="/var/lib/kubelet/pods/770cc96e-3108-4294-aa08-d84995b87c15/volumes" Jan 21 16:02:30 crc kubenswrapper[4902]: I0121 16:02:30.390275 4902 generic.go:334] "Generic (PLEG): container finished" podID="7abbaa6f-fb64-458a-bf9c-1fd63370b978" containerID="7dcfa3ca5d15f7808f5af95de45f6bb83034e3c73c913f10478deaba94fa2fdd" exitCode=0 Jan 21 16:02:30 crc kubenswrapper[4902]: I0121 16:02:30.390320 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"7abbaa6f-fb64-458a-bf9c-1fd63370b978","Type":"ContainerDied","Data":"7dcfa3ca5d15f7808f5af95de45f6bb83034e3c73c913f10478deaba94fa2fdd"} Jan 21 16:02:30 crc kubenswrapper[4902]: I0121 16:02:30.390362 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"7abbaa6f-fb64-458a-bf9c-1fd63370b978","Type":"ContainerStarted","Data":"bd37f84ca51450b73121db67e591ddfc7fa9317a18639717d3561bf6355411ed"} Jan 21 16:02:31 crc kubenswrapper[4902]: I0121 16:02:31.695271 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:02:31 crc kubenswrapper[4902]: I0121 16:02:31.718536 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_7abbaa6f-fb64-458a-bf9c-1fd63370b978/mariadb-client/0.log" Jan 21 16:02:31 crc kubenswrapper[4902]: I0121 16:02:31.744575 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:31 crc kubenswrapper[4902]: I0121 16:02:31.750926 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:31 crc kubenswrapper[4902]: I0121 16:02:31.793096 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blmx2\" (UniqueName: \"kubernetes.io/projected/7abbaa6f-fb64-458a-bf9c-1fd63370b978-kube-api-access-blmx2\") pod \"7abbaa6f-fb64-458a-bf9c-1fd63370b978\" (UID: \"7abbaa6f-fb64-458a-bf9c-1fd63370b978\") " Jan 21 16:02:31 crc kubenswrapper[4902]: I0121 16:02:31.801377 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7abbaa6f-fb64-458a-bf9c-1fd63370b978-kube-api-access-blmx2" (OuterVolumeSpecName: "kube-api-access-blmx2") pod "7abbaa6f-fb64-458a-bf9c-1fd63370b978" (UID: "7abbaa6f-fb64-458a-bf9c-1fd63370b978"). InnerVolumeSpecName "kube-api-access-blmx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:02:31 crc kubenswrapper[4902]: I0121 16:02:31.894727 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blmx2\" (UniqueName: \"kubernetes.io/projected/7abbaa6f-fb64-458a-bf9c-1fd63370b978-kube-api-access-blmx2\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:32 crc kubenswrapper[4902]: I0121 16:02:32.306633 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7abbaa6f-fb64-458a-bf9c-1fd63370b978" path="/var/lib/kubelet/pods/7abbaa6f-fb64-458a-bf9c-1fd63370b978/volumes" Jan 21 16:02:32 crc kubenswrapper[4902]: I0121 16:02:32.406088 4902 scope.go:117] "RemoveContainer" containerID="7dcfa3ca5d15f7808f5af95de45f6bb83034e3c73c913f10478deaba94fa2fdd" Jan 21 16:02:32 crc kubenswrapper[4902]: I0121 16:02:32.406112 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:02:33 crc kubenswrapper[4902]: I0121 16:02:33.295396 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:02:33 crc kubenswrapper[4902]: E0121 16:02:33.295747 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:02:45 crc kubenswrapper[4902]: I0121 16:02:45.295227 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:02:45 crc kubenswrapper[4902]: E0121 16:02:45.295969 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:02:58 crc kubenswrapper[4902]: I0121 16:02:58.302289 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:02:58 crc kubenswrapper[4902]: E0121 16:02:58.302982 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.136843 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hdb6q"] Jan 21 16:03:03 crc kubenswrapper[4902]: E0121 16:03:03.137930 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7abbaa6f-fb64-458a-bf9c-1fd63370b978" containerName="mariadb-client" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.137959 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="7abbaa6f-fb64-458a-bf9c-1fd63370b978" containerName="mariadb-client" Jan 21 16:03:03 crc 
kubenswrapper[4902]: I0121 16:03:03.138490 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="7abbaa6f-fb64-458a-bf9c-1fd63370b978" containerName="mariadb-client" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.140621 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.146801 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdb6q"] Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.334255 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-catalog-content\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.334348 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-utilities\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.334746 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkspc\" (UniqueName: \"kubernetes.io/projected/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-kube-api-access-lkspc\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.436184 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkspc\" (UniqueName: \"kubernetes.io/projected/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-kube-api-access-lkspc\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.436248 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-catalog-content\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.436273 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-utilities\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.436849 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-utilities\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.437541 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-catalog-content\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.456078 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkspc\" (UniqueName: \"kubernetes.io/projected/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-kube-api-access-lkspc\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.474444 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.903560 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdb6q"] Jan 21 16:03:04 crc kubenswrapper[4902]: I0121 16:03:04.681364 4902 generic.go:334] "Generic (PLEG): container finished" podID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerID="fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d" exitCode=0 Jan 21 16:03:04 crc kubenswrapper[4902]: I0121 16:03:04.681717 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdb6q" event={"ID":"1a0bbdbe-ee51-4b19-be3e-446c55d329ce","Type":"ContainerDied","Data":"fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d"} Jan 21 16:03:04 crc kubenswrapper[4902]: I0121 16:03:04.681756 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdb6q" event={"ID":"1a0bbdbe-ee51-4b19-be3e-446c55d329ce","Type":"ContainerStarted","Data":"be02b642998606b42c51b3f9e618047463c4734c56781cfc76114a147ceeff59"} Jan 21 16:03:05 crc kubenswrapper[4902]: I0121 16:03:05.691204 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdb6q" event={"ID":"1a0bbdbe-ee51-4b19-be3e-446c55d329ce","Type":"ContainerStarted","Data":"79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff"} Jan 21 16:03:06 crc kubenswrapper[4902]: I0121 16:03:06.701418 4902 generic.go:334] "Generic (PLEG): container finished" podID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerID="79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff" exitCode=0 Jan 21 16:03:06 crc kubenswrapper[4902]: I0121 16:03:06.701471 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdb6q" event={"ID":"1a0bbdbe-ee51-4b19-be3e-446c55d329ce","Type":"ContainerDied","Data":"79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff"} Jan 21 16:03:07 crc kubenswrapper[4902]: I0121 16:03:07.712957 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdb6q" event={"ID":"1a0bbdbe-ee51-4b19-be3e-446c55d329ce","Type":"ContainerStarted","Data":"83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450"} Jan 21 16:03:07 crc kubenswrapper[4902]: I0121 16:03:07.742860 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hdb6q" podStartSLOduration=2.169493494 podStartE2EDuration="4.742836033s" podCreationTimestamp="2026-01-21 16:03:03 +0000 UTC" firstStartedPulling="2026-01-21 16:03:04.684188661 +0000 UTC m=+5346.761021730" lastFinishedPulling="2026-01-21 16:03:07.25753121 +0000 UTC 
m=+5349.334364269" observedRunningTime="2026-01-21 16:03:07.740793036 +0000 UTC m=+5349.817626085" watchObservedRunningTime="2026-01-21 16:03:07.742836033 +0000 UTC m=+5349.819669062" Jan 21 16:03:12 crc kubenswrapper[4902]: I0121 16:03:12.294527 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:03:12 crc kubenswrapper[4902]: E0121 16:03:12.295325 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:03:13 crc kubenswrapper[4902]: I0121 16:03:13.475376 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:13 crc kubenswrapper[4902]: I0121 16:03:13.475455 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:13 crc kubenswrapper[4902]: I0121 16:03:13.533840 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:13 crc kubenswrapper[4902]: I0121 16:03:13.824962 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:13 crc kubenswrapper[4902]: I0121 16:03:13.888697 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdb6q"] Jan 21 16:03:15 crc kubenswrapper[4902]: I0121 16:03:15.770323 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hdb6q" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerName="registry-server" containerID="cri-o://83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450" gracePeriod=2 Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.205847 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.239986 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-catalog-content\") pod \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.240028 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-utilities\") pod \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.240089 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkspc\" (UniqueName: \"kubernetes.io/projected/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-kube-api-access-lkspc\") pod \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.242714 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-utilities" (OuterVolumeSpecName: "utilities") pod "1a0bbdbe-ee51-4b19-be3e-446c55d329ce" (UID: "1a0bbdbe-ee51-4b19-be3e-446c55d329ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.262342 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-kube-api-access-lkspc" (OuterVolumeSpecName: "kube-api-access-lkspc") pod "1a0bbdbe-ee51-4b19-be3e-446c55d329ce" (UID: "1a0bbdbe-ee51-4b19-be3e-446c55d329ce"). InnerVolumeSpecName "kube-api-access-lkspc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.297229 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a0bbdbe-ee51-4b19-be3e-446c55d329ce" (UID: "1a0bbdbe-ee51-4b19-be3e-446c55d329ce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.341769 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkspc\" (UniqueName: \"kubernetes.io/projected/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-kube-api-access-lkspc\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.341809 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.341820 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.778872 4902 generic.go:334] "Generic (PLEG): container finished" podID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerID="83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450" exitCode=0 Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.778920 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdb6q" event={"ID":"1a0bbdbe-ee51-4b19-be3e-446c55d329ce","Type":"ContainerDied","Data":"83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450"} Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.778946 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdb6q" event={"ID":"1a0bbdbe-ee51-4b19-be3e-446c55d329ce","Type":"ContainerDied","Data":"be02b642998606b42c51b3f9e618047463c4734c56781cfc76114a147ceeff59"} Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.778955 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.778964 4902 scope.go:117] "RemoveContainer" containerID="83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.799337 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdb6q"] Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.805960 4902 scope.go:117] "RemoveContainer" containerID="79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.808303 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdb6q"] Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.824066 4902 scope.go:117] "RemoveContainer" containerID="fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.856581 4902 scope.go:117] "RemoveContainer" containerID="83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450" Jan 21 16:03:16 crc kubenswrapper[4902]: E0121 16:03:16.857075 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450\": container with ID starting with 83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450 not found: ID does not exist" containerID="83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.857106 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450"} err="failed to get container status \"83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450\": rpc error: code = NotFound desc = could not find container \"83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450\": container with ID starting with 83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450 not found: ID does not exist" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.857127 4902 scope.go:117] "RemoveContainer" containerID="79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff" Jan 21 16:03:16 crc kubenswrapper[4902]: E0121 16:03:16.857496 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff\": container with ID starting with 79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff not found: ID does not exist" containerID="79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.857520 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff"} err="failed to get container status \"79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff\": rpc error: code = NotFound desc = could not find container \"79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff\": container with ID starting with 79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff not found: ID does not exist" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.857556 4902 scope.go:117] "RemoveContainer" 
containerID="fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d" Jan 21 16:03:16 crc kubenswrapper[4902]: E0121 16:03:16.857816 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d\": container with ID starting with fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d not found: ID does not exist" containerID="fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.857839 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d"} err="failed to get container status \"fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d\": rpc error: code = NotFound desc = could not find container \"fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d\": container with ID starting with fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d not found: ID does not exist" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.123374 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 16:03:17 crc kubenswrapper[4902]: E0121 16:03:17.123822 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerName="extract-utilities" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.123851 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerName="extract-utilities" Jan 21 16:03:17 crc kubenswrapper[4902]: E0121 16:03:17.123882 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerName="registry-server" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.123894 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerName="registry-server" Jan 21 16:03:17 crc kubenswrapper[4902]: E0121 16:03:17.123912 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerName="extract-content" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.123923 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerName="extract-content" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.124183 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerName="registry-server" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.125485 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.133962 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.134415 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.134589 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.134613 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.134701 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-lddmf" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.135123 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.135263 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.146393 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.148236 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.155862 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.169180 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.198057 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.255169 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gj2x\" (UniqueName: \"kubernetes.io/projected/69d6d956-f400-4339-8b68-c2644bb9b9eb-kube-api-access-5gj2x\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.255264 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.255403 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.255486 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " 
pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.255552 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.255731 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.255826 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/52b530ea-b7ee-4420-a3d6-d140ac75c474-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.256671 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-97432595-5de6-4cac-a1f3-f654f9ca8b9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97432595-5de6-4cac-a1f3-f654f9ca8b9c\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.257038 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.257197 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.257415 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdqfx\" (UniqueName: \"kubernetes.io/projected/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-kube-api-access-fdqfx\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.257577 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d6d956-f400-4339-8b68-c2644bb9b9eb-config\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.257743 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-config\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc 
kubenswrapper[4902]: I0121 16:03:17.257829 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69d6d956-f400-4339-8b68-c2644bb9b9eb-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.257876 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52b530ea-b7ee-4420-a3d6-d140ac75c474-config\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.257912 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-560e1c15-7dc7-46b7-928f-ec627b7d70dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-560e1c15-7dc7-46b7-928f-ec627b7d70dc\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.258113 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b7b2b112-c375-4213-a11d-eed088866ef0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7b2b112-c375-4213-a11d-eed088866ef0\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.258168 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.258220 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54thc\" (UniqueName: \"kubernetes.io/projected/52b530ea-b7ee-4420-a3d6-d140ac75c474-kube-api-access-54thc\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.258250 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.258319 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.258373 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " 
pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.258420 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69d6d956-f400-4339-8b68-c2644bb9b9eb-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.258457 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52b530ea-b7ee-4420-a3d6-d140ac75c474-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.359960 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360264 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69d6d956-f400-4339-8b68-c2644bb9b9eb-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360341 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52b530ea-b7ee-4420-a3d6-d140ac75c474-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360413 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gj2x\" (UniqueName: \"kubernetes.io/projected/69d6d956-f400-4339-8b68-c2644bb9b9eb-kube-api-access-5gj2x\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360474 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360546 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360615 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360681 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360750 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360844 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/52b530ea-b7ee-4420-a3d6-d140ac75c474-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360924 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-97432595-5de6-4cac-a1f3-f654f9ca8b9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97432595-5de6-4cac-a1f3-f654f9ca8b9c\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361036 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361153 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361273 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdqfx\" (UniqueName: \"kubernetes.io/projected/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-kube-api-access-fdqfx\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361390 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d6d956-f400-4339-8b68-c2644bb9b9eb-config\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361483 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361628 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-config\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 
16:03:17.361761 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69d6d956-f400-4339-8b68-c2644bb9b9eb-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361879 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52b530ea-b7ee-4420-a3d6-d140ac75c474-config\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361934 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/52b530ea-b7ee-4420-a3d6-d140ac75c474-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.362031 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.362146 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-560e1c15-7dc7-46b7-928f-ec627b7d70dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-560e1c15-7dc7-46b7-928f-ec627b7d70dc\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.362289 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b7b2b112-c375-4213-a11d-eed088866ef0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7b2b112-c375-4213-a11d-eed088866ef0\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.362381 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.362479 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54thc\" (UniqueName: \"kubernetes.io/projected/52b530ea-b7ee-4420-a3d6-d140ac75c474-kube-api-access-54thc\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.362571 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.362681 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.362869 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-config\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.363211 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69d6d956-f400-4339-8b68-c2644bb9b9eb-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.363505 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d6d956-f400-4339-8b68-c2644bb9b9eb-config\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361775 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52b530ea-b7ee-4420-a3d6-d140ac75c474-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.364720 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52b530ea-b7ee-4420-a3d6-d140ac75c474-config\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.364985 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.365002 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.366124 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.366299 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.366878 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability 
not set. Skipping MountDevice... Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.366918 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-97432595-5de6-4cac-a1f3-f654f9ca8b9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97432595-5de6-4cac-a1f3-f654f9ca8b9c\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6376770102d5a2c9cc14d8ba869f07cb47b601e45a29b1ab6d31477a59155ada/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.367020 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.367026 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.367125 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b7b2b112-c375-4213-a11d-eed088866ef0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7b2b112-c375-4213-a11d-eed088866ef0\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fddab078abbcdd19a8bb025b73441ff56000f64db216868e3fba63259e5ac188/globalmount\"" pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.367719 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.367752 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-560e1c15-7dc7-46b7-928f-ec627b7d70dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-560e1c15-7dc7-46b7-928f-ec627b7d70dc\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9dd2dd440dd20c9c629c0b341fc66910819ff310cbb68fa497bc94336f1aa38e/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.374084 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.375200 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.381371 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.385560 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69d6d956-f400-4339-8b68-c2644bb9b9eb-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.389424 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54thc\" (UniqueName: \"kubernetes.io/projected/52b530ea-b7ee-4420-a3d6-d140ac75c474-kube-api-access-54thc\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.389762 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdqfx\" (UniqueName: \"kubernetes.io/projected/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-kube-api-access-fdqfx\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.391361 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.394572 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gj2x\" (UniqueName: \"kubernetes.io/projected/69d6d956-f400-4339-8b68-c2644bb9b9eb-kube-api-access-5gj2x\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc 
kubenswrapper[4902]: I0121 16:03:17.447027 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-97432595-5de6-4cac-a1f3-f654f9ca8b9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97432595-5de6-4cac-a1f3-f654f9ca8b9c\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0"
Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.457355 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-560e1c15-7dc7-46b7-928f-ec627b7d70dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-560e1c15-7dc7-46b7-928f-ec627b7d70dc\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1"
Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.460770 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b7b2b112-c375-4213-a11d-eed088866ef0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7b2b112-c375-4213-a11d-eed088866ef0\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2"
Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.467627 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.742929 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.752833 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.019557 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.178695 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.307693 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" path="/var/lib/kubelet/pods/1a0bbdbe-ee51-4b19-be3e-446c55d329ce/volumes"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.564541 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.668873 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.670263 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.678653 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.679088 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.679975 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-qvwb5"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.680116 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.693758 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.700697 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"]
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.702129 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.710913 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"]
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.712558 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.741737 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.749971 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.799284 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.799364 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fa609e80-09d5-4393-a79f-9989f9223bdd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.799410 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.799452 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa609e80-09d5-4393-a79f-9989f9223bdd-config\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.799505 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa609e80-09d5-4393-a79f-9989f9223bdd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.799545 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8scp\" (UniqueName: \"kubernetes.io/projected/fa609e80-09d5-4393-a79f-9989f9223bdd-kube-api-access-m8scp\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.799585 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8778a376-340b-45e3-b39d-cce41c466f3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8778a376-340b-45e3-b39d-cce41c466f3b\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.799629 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.802904 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"69d6d956-f400-4339-8b68-c2644bb9b9eb","Type":"ContainerStarted","Data":"713ed28339ce0c81664640268861d087ecbd17dc1c3484e78c87508b0a91ff6a"} Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.802953 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"69d6d956-f400-4339-8b68-c2644bb9b9eb","Type":"ContainerStarted","Data":"221cd96926971a5fc0702e0876716e32692bc5a1e0c503e68787f580e8acdd64"} Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.804508 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2","Type":"ContainerStarted","Data":"0b77c9c9e0e00ad5e813e70ecba66df8b2621c5aa975b59b640426d494976851"} Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.807492 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"52b530ea-b7ee-4420-a3d6-d140ac75c474","Type":"ContainerStarted","Data":"b59eecebe42c6fff02b6e60d8280e8d78a547b8431fe1ee315d35b50d66ecb9b"} Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.900986 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8scp\" (UniqueName: \"kubernetes.io/projected/fa609e80-09d5-4393-a79f-9989f9223bdd-kube-api-access-m8scp\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901087 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51aa3a3a-61f9-4757-b302-aa170904d97f-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901124 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51aa3a3a-61f9-4757-b302-aa170904d97f-config\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901148 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8778a376-340b-45e3-b39d-cce41c466f3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8778a376-340b-45e3-b39d-cce41c466f3b\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901187 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901213 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901305 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901327 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/51aa3a3a-61f9-4757-b302-aa170904d97f-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901373 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j4mf\" (UniqueName: \"kubernetes.io/projected/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-kube-api-access-4j4mf\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901399 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901423 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901458 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901482 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-config\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901522 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fa609e80-09d5-4393-a79f-9989f9223bdd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901557 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fd108b96-2c94-42eb-9abe-885541aaa945\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fd108b96-2c94-42eb-9abe-885541aaa945\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901588 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901629 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901667 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa609e80-09d5-4393-a79f-9989f9223bdd-config\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901706 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sdls\" (UniqueName: \"kubernetes.io/projected/51aa3a3a-61f9-4757-b302-aa170904d97f-kube-api-access-2sdls\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901738 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-72c95061-8905-4c50-892f-e1dd13b791b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72c95061-8905-4c50-892f-e1dd13b791b8\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901765 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901802 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa609e80-09d5-4393-a79f-9989f9223bdd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901847 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901869 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.904648 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fa609e80-09d5-4393-a79f-9989f9223bdd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.904798 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa609e80-09d5-4393-a79f-9989f9223bdd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.907780 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.908239 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa609e80-09d5-4393-a79f-9989f9223bdd-config\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.909037 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.909162 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8778a376-340b-45e3-b39d-cce41c466f3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8778a376-340b-45e3-b39d-cce41c466f3b\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/16996701e509ddf4c9edcb9d835961256510b9b47ca58d41158d6daa37486c0d/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.923540 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.923847 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.924765 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8scp\" (UniqueName: \"kubernetes.io/projected/fa609e80-09d5-4393-a79f-9989f9223bdd-kube-api-access-m8scp\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.962890 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8778a376-340b-45e3-b39d-cce41c466f3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8778a376-340b-45e3-b39d-cce41c466f3b\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0"
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.998478 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003539 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1"
Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003576 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1"
Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003607 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51aa3a3a-61f9-4757-b302-aa170904d97f-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1"
Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003627 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51aa3a3a-61f9-4757-b302-aa170904d97f-config\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1"
Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003646 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2"
Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003663 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2"
Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003689 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/51aa3a3a-61f9-4757-b302-aa170904d97f-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1"
Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003708 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j4mf\" (UniqueName: \"kubernetes.io/projected/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-kube-api-access-4j4mf\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2"
Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003724 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1"
Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003746 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003762 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-config\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003801 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fd108b96-2c94-42eb-9abe-885541aaa945\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fd108b96-2c94-42eb-9abe-885541aaa945\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003819 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003899 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sdls\" (UniqueName: \"kubernetes.io/projected/51aa3a3a-61f9-4757-b302-aa170904d97f-kube-api-access-2sdls\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003919 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-72c95061-8905-4c50-892f-e1dd13b791b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72c95061-8905-4c50-892f-e1dd13b791b8\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003937 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.004379 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.007176 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.008025 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51aa3a3a-61f9-4757-b302-aa170904d97f-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 
16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.008829 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.009764 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/51aa3a3a-61f9-4757-b302-aa170904d97f-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.009771 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-config\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.011970 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51aa3a3a-61f9-4757-b302-aa170904d97f-config\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.012223 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.012255 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fd108b96-2c94-42eb-9abe-885541aaa945\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fd108b96-2c94-42eb-9abe-885541aaa945\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bd25d4049fb12958d19ade43ecb8c4f5b4b71e9f8765ef9f5523c9eb7e9acecf/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.012428 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.012460 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-72c95061-8905-4c50-892f-e1dd13b791b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72c95061-8905-4c50-892f-e1dd13b791b8\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0fec25508ac70e473cfdac92e070550d89bcff2b29e75e1519b09d4ff6b8c411/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.012957 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.013512 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.020856 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.024856 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.027766 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.033062 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sdls\" (UniqueName: \"kubernetes.io/projected/51aa3a3a-61f9-4757-b302-aa170904d97f-kube-api-access-2sdls\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.034705 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j4mf\" (UniqueName: \"kubernetes.io/projected/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-kube-api-access-4j4mf\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.063101 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fd108b96-2c94-42eb-9abe-885541aaa945\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fd108b96-2c94-42eb-9abe-885541aaa945\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " 
pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.112181 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-72c95061-8905-4c50-892f-e1dd13b791b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72c95061-8905-4c50-892f-e1dd13b791b8\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.322898 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.339537 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.682545 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.785096 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 21 16:03:19 crc kubenswrapper[4902]: W0121 16:03:19.788852 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51aa3a3a_61f9_4757_b302_aa170904d97f.slice/crio-4d3a3d57b01b7fd6191f348740c7d38ae92c7b48e105628aec4c7d9f51b0db42 WatchSource:0}: Error finding container 4d3a3d57b01b7fd6191f348740c7d38ae92c7b48e105628aec4c7d9f51b0db42: Status 404 returned error can't find the container with id 4d3a3d57b01b7fd6191f348740c7d38ae92c7b48e105628aec4c7d9f51b0db42 Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.819738 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"69d6d956-f400-4339-8b68-c2644bb9b9eb","Type":"ContainerStarted","Data":"0bb79ecdee28f880e552556bd4939429494e9bfb063f343efcfff77bd792cec7"} Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.821975 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fa609e80-09d5-4393-a79f-9989f9223bdd","Type":"ContainerStarted","Data":"da042cb7c84cc2b2b529422e8ce70b299f659710f1acddb459383fabee51e5b2"} Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.824546 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"51aa3a3a-61f9-4757-b302-aa170904d97f","Type":"ContainerStarted","Data":"4d3a3d57b01b7fd6191f348740c7d38ae92c7b48e105628aec4c7d9f51b0db42"} Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.826391 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2","Type":"ContainerStarted","Data":"6bd048219e1683d80362f11d25faacd5fbaea2be7eb1c853f5765fc8ed6e2eaf"} Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.826426 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2","Type":"ContainerStarted","Data":"beb542aba278782fbe8b9bf6055438f73de81761200158808ae24634ea7e7086"} Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.828692 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"52b530ea-b7ee-4420-a3d6-d140ac75c474","Type":"ContainerStarted","Data":"9f67e58fbff69968a740146c35b211053897854c82bac23fba02de5d144e6d83"} Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.828724 4902 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"52b530ea-b7ee-4420-a3d6-d140ac75c474","Type":"ContainerStarted","Data":"78bd186e6934af7ccfa619d99513cb0cc5fc08849adf798d2ae9ea056cc9c7e0"} Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.841408 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.8413909520000002 podStartE2EDuration="3.841390952s" podCreationTimestamp="2026-01-21 16:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:19.837639357 +0000 UTC m=+5361.914472386" watchObservedRunningTime="2026-01-21 16:03:19.841390952 +0000 UTC m=+5361.918223981" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.869091 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.869022096 podStartE2EDuration="3.869022096s" podCreationTimestamp="2026-01-21 16:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:19.866338301 +0000 UTC m=+5361.943171340" watchObservedRunningTime="2026-01-21 16:03:19.869022096 +0000 UTC m=+5361.945855125" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.899343 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.899320435 podStartE2EDuration="3.899320435s" podCreationTimestamp="2026-01-21 16:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:19.89093651 +0000 UTC m=+5361.967769539" watchObservedRunningTime="2026-01-21 16:03:19.899320435 +0000 UTC m=+5361.976153464" Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.043393 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 21 16:03:20 crc kubenswrapper[4902]: W0121 16:03:20.055472 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaadc3978_ec1c_4d8d_8d02_f199d6509d5c.slice/crio-045bbc59816fd15959ce4cc70990898c0f1a7e33935d1c77333184402dd6365e WatchSource:0}: Error finding container 045bbc59816fd15959ce4cc70990898c0f1a7e33935d1c77333184402dd6365e: Status 404 returned error can't find the container with id 045bbc59816fd15959ce4cc70990898c0f1a7e33935d1c77333184402dd6365e Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.469285 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.743213 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.753440 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.848092 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"aadc3978-ec1c-4d8d-8d02-f199d6509d5c","Type":"ContainerStarted","Data":"10687baf2ccc1b8ec1e74cffaedab771b98c26f77c65e3480a13405b39359195"} Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.848136 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" 
event={"ID":"aadc3978-ec1c-4d8d-8d02-f199d6509d5c","Type":"ContainerStarted","Data":"13fa5dffe47aa2b6494ef06624514684be683d0bccb540b1717faf78034c328f"} Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.848147 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"aadc3978-ec1c-4d8d-8d02-f199d6509d5c","Type":"ContainerStarted","Data":"045bbc59816fd15959ce4cc70990898c0f1a7e33935d1c77333184402dd6365e"} Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.850937 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fa609e80-09d5-4393-a79f-9989f9223bdd","Type":"ContainerStarted","Data":"4945eee6f6827bbe6ae155b3c5ba7fe09ba1f585ab8f384f06172e48c8a2b5fb"} Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.850985 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fa609e80-09d5-4393-a79f-9989f9223bdd","Type":"ContainerStarted","Data":"9a2c03eff2ca62968b807f2db3f48c3fa66978afc8251433c37ed434bb6584b8"} Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.853714 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"51aa3a3a-61f9-4757-b302-aa170904d97f","Type":"ContainerStarted","Data":"86d76922527bd27c8e68b8440afffd36010d98c79654cae923ade659da2b07e5"} Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.853865 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"51aa3a3a-61f9-4757-b302-aa170904d97f","Type":"ContainerStarted","Data":"45b8282f46f807884a3909c1753ccd4087463b3ae218c1364b729c1584ce0f88"} Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.867245 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.867221186 podStartE2EDuration="3.867221186s" podCreationTimestamp="2026-01-21 16:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:20.866098004 +0000 UTC m=+5362.942931043" watchObservedRunningTime="2026-01-21 16:03:20.867221186 +0000 UTC m=+5362.944054215" Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.893610 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.893579034 podStartE2EDuration="3.893579034s" podCreationTimestamp="2026-01-21 16:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:20.891140306 +0000 UTC m=+5362.967973335" watchObservedRunningTime="2026-01-21 16:03:20.893579034 +0000 UTC m=+5362.970412063" Jan 21 16:03:22 crc kubenswrapper[4902]: I0121 16:03:21.999733 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:22 crc kubenswrapper[4902]: I0121 16:03:22.323182 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:22 crc kubenswrapper[4902]: I0121 16:03:22.340469 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:22 crc kubenswrapper[4902]: I0121 16:03:22.468769 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:22 crc kubenswrapper[4902]: I0121 16:03:22.743129 4902 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:22 crc kubenswrapper[4902]: I0121 16:03:22.753282 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:23 crc kubenswrapper[4902]: I0121 16:03:23.512790 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:23 crc kubenswrapper[4902]: I0121 16:03:23.538372 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=6.538344884 podStartE2EDuration="6.538344884s" podCreationTimestamp="2026-01-21 16:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:20.911446025 +0000 UTC m=+5362.988279054" watchObservedRunningTime="2026-01-21 16:03:23.538344884 +0000 UTC m=+5365.615177913" Jan 21 16:03:23 crc kubenswrapper[4902]: I0121 16:03:23.789723 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:23 crc kubenswrapper[4902]: I0121 16:03:23.803779 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:23 crc kubenswrapper[4902]: I0121 16:03:23.914476 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:23 crc kubenswrapper[4902]: I0121 16:03:23.922759 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:23 crc kubenswrapper[4902]: I0121 16:03:23.924867 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:23 crc kubenswrapper[4902]: I0121 16:03:23.999468 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.100897 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d944468c-9qwvt"] Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.102211 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.105757 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.125264 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d944468c-9qwvt"] Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.207073 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-config\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.207156 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-ovsdbserver-nb\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.207240 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-dns-svc\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.207280 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr2fq\" (UniqueName: \"kubernetes.io/projected/eb464c35-1456-495f-bbc0-3d23c076af70-kube-api-access-lr2fq\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.212054 4902 scope.go:117] "RemoveContainer" containerID="22e6c4bda8a7a16db8551cc07c7e5779cb515519925f610dd7708c60c5c8a6fc" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.308802 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-config\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.308847 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-ovsdbserver-nb\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.308906 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-dns-svc\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.308932 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr2fq\" (UniqueName: 
\"kubernetes.io/projected/eb464c35-1456-495f-bbc0-3d23c076af70-kube-api-access-lr2fq\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.309841 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-ovsdbserver-nb\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.309952 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-dns-svc\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.310056 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-config\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.323385 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.332249 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr2fq\" (UniqueName: \"kubernetes.io/projected/eb464c35-1456-495f-bbc0-3d23c076af70-kube-api-access-lr2fq\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.340542 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.422791 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.868063 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d944468c-9qwvt"] Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.893887 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d944468c-9qwvt" event={"ID":"eb464c35-1456-495f-bbc0-3d23c076af70","Type":"ContainerStarted","Data":"d2474e93bef33df41251ab8aed435fd5eabbbc74ae11fd5542967e63203c6e50"} Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.046968 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.092619 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.354701 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d944468c-9qwvt"] Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.378369 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.382653 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57f688859c-fb82z"] Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.384440 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.386396 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.403324 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57f688859c-fb82z"] Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.431405 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.433961 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-sb\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.434002 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-nb\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.434154 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-dns-svc\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.434281 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-config\") pod 
\"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.434342 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b77t\" (UniqueName: \"kubernetes.io/projected/fdfebc8b-bc5c-4214-acee-021a404994bf-kube-api-access-7b77t\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.437421 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.484605 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.535335 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-config\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.535415 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b77t\" (UniqueName: \"kubernetes.io/projected/fdfebc8b-bc5c-4214-acee-021a404994bf-kube-api-access-7b77t\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.535451 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-sb\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.535469 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-nb\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.535561 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-dns-svc\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.536592 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-dns-svc\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.537220 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-config\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 
21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.537987 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-sb\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.538201 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-nb\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.557030 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b77t\" (UniqueName: \"kubernetes.io/projected/fdfebc8b-bc5c-4214-acee-021a404994bf-kube-api-access-7b77t\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.721524 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.913239 4902 generic.go:334] "Generic (PLEG): container finished" podID="eb464c35-1456-495f-bbc0-3d23c076af70" containerID="ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1" exitCode=0 Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.913297 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d944468c-9qwvt" event={"ID":"eb464c35-1456-495f-bbc0-3d23c076af70","Type":"ContainerDied","Data":"ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1"} Jan 21 16:03:26 crc kubenswrapper[4902]: W0121 16:03:26.143165 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdfebc8b_bc5c_4214_acee_021a404994bf.slice/crio-f3c03b700062d45894766c079b31174948f641c82af2f122619825aabb3684d0 WatchSource:0}: Error finding container f3c03b700062d45894766c079b31174948f641c82af2f122619825aabb3684d0: Status 404 returned error can't find the container with id f3c03b700062d45894766c079b31174948f641c82af2f122619825aabb3684d0 Jan 21 16:03:26 crc kubenswrapper[4902]: I0121 16:03:26.144804 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57f688859c-fb82z"] Jan 21 16:03:26 crc kubenswrapper[4902]: E0121 16:03:26.452461 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdfebc8b_bc5c_4214_acee_021a404994bf.slice/crio-311b61cd815d9e9e4c95e8d3428eb904438d2e7a6efb54993e589d294d8780c4.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:03:26 crc kubenswrapper[4902]: I0121 16:03:26.922543 4902 generic.go:334] "Generic (PLEG): container finished" podID="fdfebc8b-bc5c-4214-acee-021a404994bf" containerID="311b61cd815d9e9e4c95e8d3428eb904438d2e7a6efb54993e589d294d8780c4" exitCode=0 Jan 21 16:03:26 crc kubenswrapper[4902]: I0121 16:03:26.922609 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f688859c-fb82z" 
event={"ID":"fdfebc8b-bc5c-4214-acee-021a404994bf","Type":"ContainerDied","Data":"311b61cd815d9e9e4c95e8d3428eb904438d2e7a6efb54993e589d294d8780c4"} Jan 21 16:03:26 crc kubenswrapper[4902]: I0121 16:03:26.922638 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f688859c-fb82z" event={"ID":"fdfebc8b-bc5c-4214-acee-021a404994bf","Type":"ContainerStarted","Data":"f3c03b700062d45894766c079b31174948f641c82af2f122619825aabb3684d0"} Jan 21 16:03:26 crc kubenswrapper[4902]: I0121 16:03:26.925361 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d944468c-9qwvt" event={"ID":"eb464c35-1456-495f-bbc0-3d23c076af70","Type":"ContainerStarted","Data":"59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8"} Jan 21 16:03:26 crc kubenswrapper[4902]: I0121 16:03:26.925516 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d944468c-9qwvt" podUID="eb464c35-1456-495f-bbc0-3d23c076af70" containerName="dnsmasq-dns" containerID="cri-o://59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8" gracePeriod=10 Jan 21 16:03:26 crc kubenswrapper[4902]: I0121 16:03:26.925603 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.010771 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d944468c-9qwvt" podStartSLOduration=3.010752995 podStartE2EDuration="3.010752995s" podCreationTimestamp="2026-01-21 16:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:27.008852362 +0000 UTC m=+5369.085685381" watchObservedRunningTime="2026-01-21 16:03:27.010752995 +0000 UTC m=+5369.087586024" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.294882 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:03:27 crc kubenswrapper[4902]: E0121 16:03:27.295116 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.549599 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.676417 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-dns-svc\") pod \"eb464c35-1456-495f-bbc0-3d23c076af70\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.677517 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr2fq\" (UniqueName: \"kubernetes.io/projected/eb464c35-1456-495f-bbc0-3d23c076af70-kube-api-access-lr2fq\") pod \"eb464c35-1456-495f-bbc0-3d23c076af70\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.677631 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-config\") pod \"eb464c35-1456-495f-bbc0-3d23c076af70\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.677771 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-ovsdbserver-nb\") pod \"eb464c35-1456-495f-bbc0-3d23c076af70\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.681717 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb464c35-1456-495f-bbc0-3d23c076af70-kube-api-access-lr2fq" (OuterVolumeSpecName: "kube-api-access-lr2fq") pod "eb464c35-1456-495f-bbc0-3d23c076af70" (UID: "eb464c35-1456-495f-bbc0-3d23c076af70"). InnerVolumeSpecName "kube-api-access-lr2fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.722576 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb464c35-1456-495f-bbc0-3d23c076af70" (UID: "eb464c35-1456-495f-bbc0-3d23c076af70"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.724515 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-config" (OuterVolumeSpecName: "config") pod "eb464c35-1456-495f-bbc0-3d23c076af70" (UID: "eb464c35-1456-495f-bbc0-3d23c076af70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.726325 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb464c35-1456-495f-bbc0-3d23c076af70" (UID: "eb464c35-1456-495f-bbc0-3d23c076af70"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.780119 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.780339 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr2fq\" (UniqueName: \"kubernetes.io/projected/eb464c35-1456-495f-bbc0-3d23c076af70-kube-api-access-lr2fq\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.780414 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.780505 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.837156 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 21 16:03:27 crc kubenswrapper[4902]: E0121 16:03:27.837503 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb464c35-1456-495f-bbc0-3d23c076af70" containerName="init" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.837515 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb464c35-1456-495f-bbc0-3d23c076af70" containerName="init" Jan 21 16:03:27 crc kubenswrapper[4902]: E0121 16:03:27.837527 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb464c35-1456-495f-bbc0-3d23c076af70" containerName="dnsmasq-dns" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.837532 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb464c35-1456-495f-bbc0-3d23c076af70" containerName="dnsmasq-dns" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.837691 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb464c35-1456-495f-bbc0-3d23c076af70" containerName="dnsmasq-dns" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.838397 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.841441 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.850558 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.934807 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f688859c-fb82z" event={"ID":"fdfebc8b-bc5c-4214-acee-021a404994bf","Type":"ContainerStarted","Data":"401f56f07810074a750a97f4da0d7c60e93e7a8c193e6d8365b52546dfbecc13"} Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.934981 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.938116 4902 generic.go:334] "Generic (PLEG): container finished" podID="eb464c35-1456-495f-bbc0-3d23c076af70" containerID="59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8" exitCode=0 Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.938145 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d944468c-9qwvt" event={"ID":"eb464c35-1456-495f-bbc0-3d23c076af70","Type":"ContainerDied","Data":"59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8"} Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.938162 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d944468c-9qwvt" event={"ID":"eb464c35-1456-495f-bbc0-3d23c076af70","Type":"ContainerDied","Data":"d2474e93bef33df41251ab8aed435fd5eabbbc74ae11fd5542967e63203c6e50"} Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.938179 4902 scope.go:117] "RemoveContainer" containerID="59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.938325 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.963750 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57f688859c-fb82z" podStartSLOduration=2.963730228 podStartE2EDuration="2.963730228s" podCreationTimestamp="2026-01-21 16:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:27.953610625 +0000 UTC m=+5370.030443654" watchObservedRunningTime="2026-01-21 16:03:27.963730228 +0000 UTC m=+5370.040563257" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.965350 4902 scope.go:117] "RemoveContainer" containerID="ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.975546 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d944468c-9qwvt"] Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.982997 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d944468c-9qwvt"] Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.983446 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/15260f61-f63b-48cf-8c1d-1269ed5264d6-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.983559 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p997k\" (UniqueName: \"kubernetes.io/projected/15260f61-f63b-48cf-8c1d-1269ed5264d6-kube-api-access-p997k\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.983599 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-292168a9-bbdb-49af-9ca7-c15d08ebd2ba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-292168a9-bbdb-49af-9ca7-c15d08ebd2ba\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.986120 4902 scope.go:117] "RemoveContainer" containerID="59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8" Jan 21 16:03:27 crc kubenswrapper[4902]: E0121 16:03:27.986506 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8\": container with ID starting with 59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8 not found: ID does not exist" containerID="59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.986564 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8"} err="failed to get container status \"59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8\": rpc error: code = NotFound desc = could not find container \"59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8\": container with ID starting with 59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8 not found: ID does 
not exist" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.986598 4902 scope.go:117] "RemoveContainer" containerID="ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1" Jan 21 16:03:27 crc kubenswrapper[4902]: E0121 16:03:27.987093 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1\": container with ID starting with ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1 not found: ID does not exist" containerID="ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.987125 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1"} err="failed to get container status \"ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1\": rpc error: code = NotFound desc = could not find container \"ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1\": container with ID starting with ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1 not found: ID does not exist" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.084718 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/15260f61-f63b-48cf-8c1d-1269ed5264d6-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.085087 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p997k\" (UniqueName: \"kubernetes.io/projected/15260f61-f63b-48cf-8c1d-1269ed5264d6-kube-api-access-p997k\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.085203 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-292168a9-bbdb-49af-9ca7-c15d08ebd2ba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-292168a9-bbdb-49af-9ca7-c15d08ebd2ba\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.087633 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.087676 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-292168a9-bbdb-49af-9ca7-c15d08ebd2ba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-292168a9-bbdb-49af-9ca7-c15d08ebd2ba\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9d0337883da665e7f9f3b16b7d379ef59044766ce24adb35f8e12bf624dbdf08/globalmount\"" pod="openstack/ovn-copy-data" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.088928 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/15260f61-f63b-48cf-8c1d-1269ed5264d6-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.100848 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p997k\" (UniqueName: \"kubernetes.io/projected/15260f61-f63b-48cf-8c1d-1269ed5264d6-kube-api-access-p997k\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.112576 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-292168a9-bbdb-49af-9ca7-c15d08ebd2ba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-292168a9-bbdb-49af-9ca7-c15d08ebd2ba\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.164701 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.308775 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb464c35-1456-495f-bbc0-3d23c076af70" path="/var/lib/kubelet/pods/eb464c35-1456-495f-bbc0-3d23c076af70/volumes" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.673636 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.683675 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.948429 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"15260f61-f63b-48cf-8c1d-1269ed5264d6","Type":"ContainerStarted","Data":"4aa99197a4359638f0a077c3b965835f3f1ebab8569a45c63b721fb146eec322"} Jan 21 16:03:29 crc kubenswrapper[4902]: I0121 16:03:29.962926 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"15260f61-f63b-48cf-8c1d-1269ed5264d6","Type":"ContainerStarted","Data":"ff53c1d391c5b288428658245a6576a3b8ff756bb1960088cd4ffdc889080fc5"} Jan 21 16:03:29 crc kubenswrapper[4902]: I0121 16:03:29.986887 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.249788301 podStartE2EDuration="3.986824324s" podCreationTimestamp="2026-01-21 16:03:26 +0000 UTC" firstStartedPulling="2026-01-21 16:03:28.68316858 +0000 UTC m=+5370.760001609" lastFinishedPulling="2026-01-21 16:03:29.420204553 +0000 UTC m=+5371.497037632" observedRunningTime="2026-01-21 16:03:29.977645797 +0000 UTC m=+5372.054478846" watchObservedRunningTime="2026-01-21 16:03:29.986824324 +0000 UTC m=+5372.063657363" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.243375 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.245552 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.255859 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.256665 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.257872 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9hts2" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.259986 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.285348 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.306295 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8db1a8e-13c3-41be-9f21-24077d0e4e29-scripts\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.306532 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.306620 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8db1a8e-13c3-41be-9f21-24077d0e4e29-config\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.306687 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d9rh\" (UniqueName: \"kubernetes.io/projected/b8db1a8e-13c3-41be-9f21-24077d0e4e29-kube-api-access-4d9rh\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.306752 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b8db1a8e-13c3-41be-9f21-24077d0e4e29-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.306861 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.306944 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: 
I0121 16:03:35.408000 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8db1a8e-13c3-41be-9f21-24077d0e4e29-scripts\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.408235 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.408359 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8db1a8e-13c3-41be-9f21-24077d0e4e29-config\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.408447 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d9rh\" (UniqueName: \"kubernetes.io/projected/b8db1a8e-13c3-41be-9f21-24077d0e4e29-kube-api-access-4d9rh\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.408529 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b8db1a8e-13c3-41be-9f21-24077d0e4e29-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.408648 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.408744 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.408900 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8db1a8e-13c3-41be-9f21-24077d0e4e29-scripts\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.408932 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b8db1a8e-13c3-41be-9f21-24077d0e4e29-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.409355 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8db1a8e-13c3-41be-9f21-24077d0e4e29-config\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.414852 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.417287 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.417718 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.434997 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d9rh\" (UniqueName: \"kubernetes.io/projected/b8db1a8e-13c3-41be-9f21-24077d0e4e29-kube-api-access-4d9rh\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.561393 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.724879 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.805998 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-tv7h5"] Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.806314 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" podUID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" containerName="dnsmasq-dns" containerID="cri-o://dcfdb86c7fc37ba60155fa847c386b16cdc514c878f35f9ebd3ae35f87d4d133" gracePeriod=10 Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.024309 4902 generic.go:334] "Generic (PLEG): container finished" podID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" containerID="dcfdb86c7fc37ba60155fa847c386b16cdc514c878f35f9ebd3ae35f87d4d133" exitCode=0 Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.024643 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" event={"ID":"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495","Type":"ContainerDied","Data":"dcfdb86c7fc37ba60155fa847c386b16cdc514c878f35f9ebd3ae35f87d4d133"} Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.079939 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.193801 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.324268 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-dns-svc\") pod \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.324496 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fngb\" (UniqueName: \"kubernetes.io/projected/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-kube-api-access-5fngb\") pod \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.324564 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-config\") pod \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.329092 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-kube-api-access-5fngb" (OuterVolumeSpecName: "kube-api-access-5fngb") pod "9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" (UID: "9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495"). InnerVolumeSpecName "kube-api-access-5fngb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.376171 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" (UID: "9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.383154 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-config" (OuterVolumeSpecName: "config") pod "9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" (UID: "9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.426034 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.426094 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.426109 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fngb\" (UniqueName: \"kubernetes.io/projected/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-kube-api-access-5fngb\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.037306 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" event={"ID":"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495","Type":"ContainerDied","Data":"44ae670f7eef2a4f69445e5a528bd2462006fde1e2ee9d0bfd1314e5b3fef469"} Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.037360 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.037403 4902 scope.go:117] "RemoveContainer" containerID="dcfdb86c7fc37ba60155fa847c386b16cdc514c878f35f9ebd3ae35f87d4d133" Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.040335 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b8db1a8e-13c3-41be-9f21-24077d0e4e29","Type":"ContainerStarted","Data":"26ef1b8702c0d19e28bd0877e194c280b8dda6c145c70003041ce54fd44e2cff"} Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.040368 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b8db1a8e-13c3-41be-9f21-24077d0e4e29","Type":"ContainerStarted","Data":"d816bffd3d2f2355cd8337b089c56d42732f2bcb190939b8a40a6c3af3c7b3c7"} Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.040386 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b8db1a8e-13c3-41be-9f21-24077d0e4e29","Type":"ContainerStarted","Data":"eb4275d52e1ff49ba1311059884959115b5ec6aff61502ea0194dc7d6ef1c53c"} Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.040611 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.074623 4902 scope.go:117] "RemoveContainer" containerID="55e0afd2388c802fda6ed46b943d3d217c1e55bd357a95a943ea57a5cb135bcf" Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.082849 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.082831822 podStartE2EDuration="2.082831822s" podCreationTimestamp="2026-01-21 16:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:37.07095489 +0000 UTC m=+5379.147787919" watchObservedRunningTime="2026-01-21 16:03:37.082831822 +0000 UTC m=+5379.159664851" Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.087326 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-tv7h5"] Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.092776 4902 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-tv7h5"] Jan 21 16:03:38 crc kubenswrapper[4902]: I0121 16:03:38.304271 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" path="/var/lib/kubelet/pods/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495/volumes" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.146727 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-9fbzk"] Jan 21 16:03:40 crc kubenswrapper[4902]: E0121 16:03:40.147256 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" containerName="init" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.147268 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" containerName="init" Jan 21 16:03:40 crc kubenswrapper[4902]: E0121 16:03:40.147288 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" containerName="dnsmasq-dns" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.147295 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" containerName="dnsmasq-dns" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.147478 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" containerName="dnsmasq-dns" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.147979 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.166594 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9fbzk"] Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.244888 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-fd43-account-create-update-f6bm7"] Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.245870 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.248907 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.260689 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fd43-account-create-update-f6bm7"] Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.294665 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:03:40 crc kubenswrapper[4902]: E0121 16:03:40.294950 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.295006 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-operator-scripts\") pod \"keystone-db-create-9fbzk\" (UID: \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\") " pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.295072 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-operator-scripts\") pod \"keystone-fd43-account-create-update-f6bm7\" (UID: \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\") " pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.295206 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zskk9\" (UniqueName: \"kubernetes.io/projected/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-kube-api-access-zskk9\") pod \"keystone-db-create-9fbzk\" (UID: \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\") " pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.295244 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29s8q\" (UniqueName: \"kubernetes.io/projected/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-kube-api-access-29s8q\") pod \"keystone-fd43-account-create-update-f6bm7\" (UID: \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\") " pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.397888 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-operator-scripts\") pod \"keystone-db-create-9fbzk\" (UID: \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\") " pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.398085 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-operator-scripts\") pod \"keystone-fd43-account-create-update-f6bm7\" (UID: \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\") " 
pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.398448 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zskk9\" (UniqueName: \"kubernetes.io/projected/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-kube-api-access-zskk9\") pod \"keystone-db-create-9fbzk\" (UID: \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\") " pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.398505 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29s8q\" (UniqueName: \"kubernetes.io/projected/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-kube-api-access-29s8q\") pod \"keystone-fd43-account-create-update-f6bm7\" (UID: \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\") " pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.398660 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-operator-scripts\") pod \"keystone-db-create-9fbzk\" (UID: \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\") " pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.399144 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-operator-scripts\") pod \"keystone-fd43-account-create-update-f6bm7\" (UID: \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\") " pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.418105 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29s8q\" (UniqueName: \"kubernetes.io/projected/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-kube-api-access-29s8q\") pod \"keystone-fd43-account-create-update-f6bm7\" (UID: \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\") " pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.418167 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zskk9\" (UniqueName: \"kubernetes.io/projected/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-kube-api-access-zskk9\") pod \"keystone-db-create-9fbzk\" (UID: \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\") " pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.475245 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.561344 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:41 crc kubenswrapper[4902]: W0121 16:03:41.073792 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd3463ca_5f37_4a7e_9f53_c32f2abe3502.slice/crio-f3b8eb4601e0c26c26e896dc3a959fad15e60b9c8ee5c1b2c51c4659ff955289 WatchSource:0}: Error finding container f3b8eb4601e0c26c26e896dc3a959fad15e60b9c8ee5c1b2c51c4659ff955289: Status 404 returned error can't find the container with id f3b8eb4601e0c26c26e896dc3a959fad15e60b9c8ee5c1b2c51c4659ff955289 Jan 21 16:03:41 crc kubenswrapper[4902]: I0121 16:03:41.076007 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9fbzk"] Jan 21 16:03:41 crc kubenswrapper[4902]: I0121 16:03:41.207442 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fd43-account-create-update-f6bm7"] Jan 21 16:03:41 crc kubenswrapper[4902]: W0121 16:03:41.210417 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b9f6374_66c7_4124_b410_c5d60c8f0d6b.slice/crio-3854c16cd249a08d3a0c5a72df006174d88fc50b746387702607178c4b097050 WatchSource:0}: Error finding container 3854c16cd249a08d3a0c5a72df006174d88fc50b746387702607178c4b097050: Status 404 returned error can't find the container with id 3854c16cd249a08d3a0c5a72df006174d88fc50b746387702607178c4b097050 Jan 21 16:03:42 crc kubenswrapper[4902]: I0121 16:03:42.087884 4902 generic.go:334] "Generic (PLEG): container finished" podID="0b9f6374-66c7-4124-b410-c5d60c8f0d6b" containerID="994f6f05fed4b0e62e48fa8578c2ecb21f387018408d5954555b07ebf19b3b49" exitCode=0 Jan 21 16:03:42 crc kubenswrapper[4902]: I0121 16:03:42.087949 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fd43-account-create-update-f6bm7" event={"ID":"0b9f6374-66c7-4124-b410-c5d60c8f0d6b","Type":"ContainerDied","Data":"994f6f05fed4b0e62e48fa8578c2ecb21f387018408d5954555b07ebf19b3b49"} Jan 21 16:03:42 crc kubenswrapper[4902]: I0121 16:03:42.088262 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fd43-account-create-update-f6bm7" event={"ID":"0b9f6374-66c7-4124-b410-c5d60c8f0d6b","Type":"ContainerStarted","Data":"3854c16cd249a08d3a0c5a72df006174d88fc50b746387702607178c4b097050"} Jan 21 16:03:42 crc kubenswrapper[4902]: I0121 16:03:42.090512 4902 generic.go:334] "Generic (PLEG): container finished" podID="dd3463ca-5f37-4a7e-9f53-c32f2abe3502" containerID="94e5637468147f71d442912ca57ee6a969ce1c74828b8408d61b57b6d26eda33" exitCode=0 Jan 21 16:03:42 crc kubenswrapper[4902]: I0121 16:03:42.090557 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9fbzk" event={"ID":"dd3463ca-5f37-4a7e-9f53-c32f2abe3502","Type":"ContainerDied","Data":"94e5637468147f71d442912ca57ee6a969ce1c74828b8408d61b57b6d26eda33"} Jan 21 16:03:42 crc kubenswrapper[4902]: I0121 16:03:42.090590 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9fbzk" event={"ID":"dd3463ca-5f37-4a7e-9f53-c32f2abe3502","Type":"ContainerStarted","Data":"f3b8eb4601e0c26c26e896dc3a959fad15e60b9c8ee5c1b2c51c4659ff955289"} Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.473812 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.553998 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29s8q\" (UniqueName: \"kubernetes.io/projected/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-kube-api-access-29s8q\") pod \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\" (UID: \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\") " Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.554132 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-operator-scripts\") pod \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\" (UID: \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\") " Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.555148 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b9f6374-66c7-4124-b410-c5d60c8f0d6b" (UID: "0b9f6374-66c7-4124-b410-c5d60c8f0d6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.560648 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-kube-api-access-29s8q" (OuterVolumeSpecName: "kube-api-access-29s8q") pod "0b9f6374-66c7-4124-b410-c5d60c8f0d6b" (UID: "0b9f6374-66c7-4124-b410-c5d60c8f0d6b"). InnerVolumeSpecName "kube-api-access-29s8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.561817 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.655889 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zskk9\" (UniqueName: \"kubernetes.io/projected/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-kube-api-access-zskk9\") pod \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\" (UID: \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\") " Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.656055 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-operator-scripts\") pod \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\" (UID: \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\") " Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.656388 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29s8q\" (UniqueName: \"kubernetes.io/projected/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-kube-api-access-29s8q\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.656402 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.657145 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd3463ca-5f37-4a7e-9f53-c32f2abe3502" (UID: "dd3463ca-5f37-4a7e-9f53-c32f2abe3502"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.659602 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-kube-api-access-zskk9" (OuterVolumeSpecName: "kube-api-access-zskk9") pod "dd3463ca-5f37-4a7e-9f53-c32f2abe3502" (UID: "dd3463ca-5f37-4a7e-9f53-c32f2abe3502"). InnerVolumeSpecName "kube-api-access-zskk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.757775 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zskk9\" (UniqueName: \"kubernetes.io/projected/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-kube-api-access-zskk9\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.757813 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:44 crc kubenswrapper[4902]: I0121 16:03:44.108143 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9fbzk" event={"ID":"dd3463ca-5f37-4a7e-9f53-c32f2abe3502","Type":"ContainerDied","Data":"f3b8eb4601e0c26c26e896dc3a959fad15e60b9c8ee5c1b2c51c4659ff955289"} Jan 21 16:03:44 crc kubenswrapper[4902]: I0121 16:03:44.108595 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3b8eb4601e0c26c26e896dc3a959fad15e60b9c8ee5c1b2c51c4659ff955289" Jan 21 16:03:44 crc kubenswrapper[4902]: I0121 16:03:44.108186 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:44 crc kubenswrapper[4902]: I0121 16:03:44.110238 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fd43-account-create-update-f6bm7" event={"ID":"0b9f6374-66c7-4124-b410-c5d60c8f0d6b","Type":"ContainerDied","Data":"3854c16cd249a08d3a0c5a72df006174d88fc50b746387702607178c4b097050"} Jan 21 16:03:44 crc kubenswrapper[4902]: I0121 16:03:44.110277 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3854c16cd249a08d3a0c5a72df006174d88fc50b746387702607178c4b097050" Jan 21 16:03:44 crc kubenswrapper[4902]: I0121 16:03:44.110347 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.691125 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-m6jz2"] Jan 21 16:03:45 crc kubenswrapper[4902]: E0121 16:03:45.715774 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b9f6374-66c7-4124-b410-c5d60c8f0d6b" containerName="mariadb-account-create-update" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.715821 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9f6374-66c7-4124-b410-c5d60c8f0d6b" containerName="mariadb-account-create-update" Jan 21 16:03:45 crc kubenswrapper[4902]: E0121 16:03:45.715874 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd3463ca-5f37-4a7e-9f53-c32f2abe3502" containerName="mariadb-database-create" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.715886 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3463ca-5f37-4a7e-9f53-c32f2abe3502" containerName="mariadb-database-create" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.716600 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd3463ca-5f37-4a7e-9f53-c32f2abe3502" containerName="mariadb-database-create" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.716627 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b9f6374-66c7-4124-b410-c5d60c8f0d6b" containerName="mariadb-account-create-update" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.717447 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-m6jz2"] Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.717571 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.720519 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.720655 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-92pp7" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.720687 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.720804 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.792009 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-config-data\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.792084 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfdwq\" (UniqueName: \"kubernetes.io/projected/072d9d46-6930-490e-9561-cd7e75f05451-kube-api-access-lfdwq\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.792114 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-combined-ca-bundle\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.893269 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-config-data\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.893322 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdwq\" (UniqueName: \"kubernetes.io/projected/072d9d46-6930-490e-9561-cd7e75f05451-kube-api-access-lfdwq\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.893349 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-combined-ca-bundle\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.899925 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-config-data\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.901001 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-combined-ca-bundle\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.911256 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfdwq\" (UniqueName: \"kubernetes.io/projected/072d9d46-6930-490e-9561-cd7e75f05451-kube-api-access-lfdwq\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:46 crc kubenswrapper[4902]: I0121 16:03:46.055119 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:46 crc kubenswrapper[4902]: I0121 16:03:46.469668 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-m6jz2"] Jan 21 16:03:46 crc kubenswrapper[4902]: W0121 16:03:46.481321 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod072d9d46_6930_490e_9561_cd7e75f05451.slice/crio-8692db0225eac7ec321d256533dacda40d0bb2ca0ad4c89595a09a8bdbba897a WatchSource:0}: Error finding container 8692db0225eac7ec321d256533dacda40d0bb2ca0ad4c89595a09a8bdbba897a: Status 404 returned error can't find the container with id 8692db0225eac7ec321d256533dacda40d0bb2ca0ad4c89595a09a8bdbba897a Jan 21 16:03:47 crc kubenswrapper[4902]: I0121 16:03:47.128967 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-m6jz2" event={"ID":"072d9d46-6930-490e-9561-cd7e75f05451","Type":"ContainerStarted","Data":"c898501a393ec12d8bdad3ffbecedd820d45983cad8f57e77c1b8bf1f2602ced"} Jan 21 16:03:47 crc kubenswrapper[4902]: I0121 16:03:47.129319 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-m6jz2" event={"ID":"072d9d46-6930-490e-9561-cd7e75f05451","Type":"ContainerStarted","Data":"8692db0225eac7ec321d256533dacda40d0bb2ca0ad4c89595a09a8bdbba897a"} Jan 21 16:03:47 crc kubenswrapper[4902]: I0121 16:03:47.149194 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-m6jz2" podStartSLOduration=2.149170179 podStartE2EDuration="2.149170179s" podCreationTimestamp="2026-01-21 16:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:47.144305723 +0000 UTC m=+5389.221138812" watchObservedRunningTime="2026-01-21 16:03:47.149170179 +0000 UTC m=+5389.226003238" Jan 21 16:03:49 crc kubenswrapper[4902]: I0121 16:03:49.146597 4902 generic.go:334] "Generic (PLEG): container finished" podID="072d9d46-6930-490e-9561-cd7e75f05451" containerID="c898501a393ec12d8bdad3ffbecedd820d45983cad8f57e77c1b8bf1f2602ced" exitCode=0 Jan 21 16:03:49 crc kubenswrapper[4902]: I0121 16:03:49.146667 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-m6jz2" event={"ID":"072d9d46-6930-490e-9561-cd7e75f05451","Type":"ContainerDied","Data":"c898501a393ec12d8bdad3ffbecedd820d45983cad8f57e77c1b8bf1f2602ced"} Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.521253 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.566849 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-config-data\") pod \"072d9d46-6930-490e-9561-cd7e75f05451\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.567103 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfdwq\" (UniqueName: \"kubernetes.io/projected/072d9d46-6930-490e-9561-cd7e75f05451-kube-api-access-lfdwq\") pod \"072d9d46-6930-490e-9561-cd7e75f05451\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.567134 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-combined-ca-bundle\") pod \"072d9d46-6930-490e-9561-cd7e75f05451\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.577101 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/072d9d46-6930-490e-9561-cd7e75f05451-kube-api-access-lfdwq" (OuterVolumeSpecName: "kube-api-access-lfdwq") pod "072d9d46-6930-490e-9561-cd7e75f05451" (UID: "072d9d46-6930-490e-9561-cd7e75f05451"). InnerVolumeSpecName "kube-api-access-lfdwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.597596 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "072d9d46-6930-490e-9561-cd7e75f05451" (UID: "072d9d46-6930-490e-9561-cd7e75f05451"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.623459 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.626245 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-config-data" (OuterVolumeSpecName: "config-data") pod "072d9d46-6930-490e-9561-cd7e75f05451" (UID: "072d9d46-6930-490e-9561-cd7e75f05451"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.681173 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfdwq\" (UniqueName: \"kubernetes.io/projected/072d9d46-6930-490e-9561-cd7e75f05451-kube-api-access-lfdwq\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.681210 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.681223 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.167451 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-m6jz2" event={"ID":"072d9d46-6930-490e-9561-cd7e75f05451","Type":"ContainerDied","Data":"8692db0225eac7ec321d256533dacda40d0bb2ca0ad4c89595a09a8bdbba897a"} Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.167504 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8692db0225eac7ec321d256533dacda40d0bb2ca0ad4c89595a09a8bdbba897a" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.167557 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.426006 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b4c895fc-lcmhk"] Jan 21 16:03:51 crc kubenswrapper[4902]: E0121 16:03:51.426413 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="072d9d46-6930-490e-9561-cd7e75f05451" containerName="keystone-db-sync" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.426430 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="072d9d46-6930-490e-9561-cd7e75f05451" containerName="keystone-db-sync" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.426588 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="072d9d46-6930-490e-9561-cd7e75f05451" containerName="keystone-db-sync" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.427401 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.446977 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b4c895fc-lcmhk"] Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.470454 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vmkwm"] Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.471819 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.474777 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.475186 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.475287 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.475348 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-92pp7" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.475366 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.482023 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vmkwm"] Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.494912 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-sb\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.494978 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-config\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.495033 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-nb\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.495106 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297sx\" (UniqueName: \"kubernetes.io/projected/f2ab6913-fdd0-4944-8c16-c213aecdd825-kube-api-access-297sx\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.495135 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-dns-svc\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.597429 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-nb\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.597497 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtn99\" (UniqueName: \"kubernetes.io/projected/986c3cfd-00c0-4c5f-a3af-ef42bb380140-kube-api-access-qtn99\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.597551 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-combined-ca-bundle\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.597639 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-297sx\" (UniqueName: \"kubernetes.io/projected/f2ab6913-fdd0-4944-8c16-c213aecdd825-kube-api-access-297sx\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.598105 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-dns-svc\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.598478 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-nb\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.599008 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-dns-svc\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.599127 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-config-data\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.599335 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-fernet-keys\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.599375 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-scripts\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.599416 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-credential-keys\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.601845 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-sb\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.601930 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-config\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.602777 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-sb\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.603073 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-config\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.618851 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297sx\" (UniqueName: \"kubernetes.io/projected/f2ab6913-fdd0-4944-8c16-c213aecdd825-kube-api-access-297sx\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.704255 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtn99\" (UniqueName: \"kubernetes.io/projected/986c3cfd-00c0-4c5f-a3af-ef42bb380140-kube-api-access-qtn99\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.704631 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-combined-ca-bundle\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.704710 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-config-data\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.704755 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-fernet-keys\") pod 
\"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.704788 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-scripts\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.704823 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-credential-keys\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.710015 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-config-data\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.711987 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-combined-ca-bundle\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.712530 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-scripts\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.713842 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-fernet-keys\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.720186 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-credential-keys\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.722369 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtn99\" (UniqueName: \"kubernetes.io/projected/986c3cfd-00c0-4c5f-a3af-ef42bb380140-kube-api-access-qtn99\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.744639 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.790915 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:52 crc kubenswrapper[4902]: I0121 16:03:52.206307 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b4c895fc-lcmhk"] Jan 21 16:03:52 crc kubenswrapper[4902]: W0121 16:03:52.209084 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2ab6913_fdd0_4944_8c16_c213aecdd825.slice/crio-f65c006b6d1833eb0c682b4e268cdf5aca7299197e459f4d36236b4f3229b9fe WatchSource:0}: Error finding container f65c006b6d1833eb0c682b4e268cdf5aca7299197e459f4d36236b4f3229b9fe: Status 404 returned error can't find the container with id f65c006b6d1833eb0c682b4e268cdf5aca7299197e459f4d36236b4f3229b9fe Jan 21 16:03:52 crc kubenswrapper[4902]: I0121 16:03:52.333136 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vmkwm"] Jan 21 16:03:53 crc kubenswrapper[4902]: I0121 16:03:53.183774 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" event={"ID":"f2ab6913-fdd0-4944-8c16-c213aecdd825","Type":"ContainerDied","Data":"c548aa5ba6d350e77b6beec3d64af186cf452dd8633be8614338761c7800ca06"} Jan 21 16:03:53 crc kubenswrapper[4902]: I0121 16:03:53.184907 4902 generic.go:334] "Generic (PLEG): container finished" podID="f2ab6913-fdd0-4944-8c16-c213aecdd825" containerID="c548aa5ba6d350e77b6beec3d64af186cf452dd8633be8614338761c7800ca06" exitCode=0 Jan 21 16:03:53 crc kubenswrapper[4902]: I0121 16:03:53.185066 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" event={"ID":"f2ab6913-fdd0-4944-8c16-c213aecdd825","Type":"ContainerStarted","Data":"f65c006b6d1833eb0c682b4e268cdf5aca7299197e459f4d36236b4f3229b9fe"} Jan 21 16:03:53 crc kubenswrapper[4902]: I0121 16:03:53.187379 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vmkwm" event={"ID":"986c3cfd-00c0-4c5f-a3af-ef42bb380140","Type":"ContainerStarted","Data":"aba9f698bb3c03d4e31ec5eca5323d9be2568c046cc99860cd7803581de5e34e"} Jan 21 16:03:53 crc kubenswrapper[4902]: I0121 16:03:53.187467 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vmkwm" event={"ID":"986c3cfd-00c0-4c5f-a3af-ef42bb380140","Type":"ContainerStarted","Data":"841a20b7c5a423f1f6ce2baa8b54da8b3caff167a48c142e143e37a1664a974a"} Jan 21 16:03:53 crc kubenswrapper[4902]: I0121 16:03:53.264879 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vmkwm" podStartSLOduration=2.264852249 podStartE2EDuration="2.264852249s" podCreationTimestamp="2026-01-21 16:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:53.261709051 +0000 UTC m=+5395.338542110" watchObservedRunningTime="2026-01-21 16:03:53.264852249 +0000 UTC m=+5395.341685278" Jan 21 16:03:53 crc kubenswrapper[4902]: I0121 16:03:53.294694 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:03:53 crc kubenswrapper[4902]: E0121 16:03:53.294979 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:03:54 crc kubenswrapper[4902]: I0121 16:03:54.197867 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" event={"ID":"f2ab6913-fdd0-4944-8c16-c213aecdd825","Type":"ContainerStarted","Data":"d712f1b4cdf6346532f6de92bb64a6956b68ba70087482d2c995c46acdeba1e0"} Jan 21 16:03:54 crc kubenswrapper[4902]: I0121 16:03:54.198613 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:54 crc kubenswrapper[4902]: I0121 16:03:54.222996 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" podStartSLOduration=3.222963775 podStartE2EDuration="3.222963775s" podCreationTimestamp="2026-01-21 16:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:54.212977326 +0000 UTC m=+5396.289810365" watchObservedRunningTime="2026-01-21 16:03:54.222963775 +0000 UTC m=+5396.299796844" Jan 21 16:03:57 crc kubenswrapper[4902]: I0121 16:03:57.223618 4902 generic.go:334] "Generic (PLEG): container finished" podID="986c3cfd-00c0-4c5f-a3af-ef42bb380140" containerID="aba9f698bb3c03d4e31ec5eca5323d9be2568c046cc99860cd7803581de5e34e" exitCode=0 Jan 21 16:03:57 crc kubenswrapper[4902]: I0121 16:03:57.223677 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vmkwm" event={"ID":"986c3cfd-00c0-4c5f-a3af-ef42bb380140","Type":"ContainerDied","Data":"aba9f698bb3c03d4e31ec5eca5323d9be2568c046cc99860cd7803581de5e34e"} Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.605603 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.728522 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-scripts\") pod \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.728577 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-config-data\") pod \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.728613 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-combined-ca-bundle\") pod \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.728646 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-fernet-keys\") pod \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.728714 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtn99\" (UniqueName: \"kubernetes.io/projected/986c3cfd-00c0-4c5f-a3af-ef42bb380140-kube-api-access-qtn99\") pod \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.728827 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-credential-keys\") pod \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.734803 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/986c3cfd-00c0-4c5f-a3af-ef42bb380140-kube-api-access-qtn99" (OuterVolumeSpecName: "kube-api-access-qtn99") pod "986c3cfd-00c0-4c5f-a3af-ef42bb380140" (UID: "986c3cfd-00c0-4c5f-a3af-ef42bb380140"). InnerVolumeSpecName "kube-api-access-qtn99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.735480 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "986c3cfd-00c0-4c5f-a3af-ef42bb380140" (UID: "986c3cfd-00c0-4c5f-a3af-ef42bb380140"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.735864 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "986c3cfd-00c0-4c5f-a3af-ef42bb380140" (UID: "986c3cfd-00c0-4c5f-a3af-ef42bb380140"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.737338 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-scripts" (OuterVolumeSpecName: "scripts") pod "986c3cfd-00c0-4c5f-a3af-ef42bb380140" (UID: "986c3cfd-00c0-4c5f-a3af-ef42bb380140"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.755818 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-config-data" (OuterVolumeSpecName: "config-data") pod "986c3cfd-00c0-4c5f-a3af-ef42bb380140" (UID: "986c3cfd-00c0-4c5f-a3af-ef42bb380140"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.762177 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "986c3cfd-00c0-4c5f-a3af-ef42bb380140" (UID: "986c3cfd-00c0-4c5f-a3af-ef42bb380140"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.831138 4902 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.831187 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.831198 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.831208 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.831216 4902 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.831224 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtn99\" (UniqueName: \"kubernetes.io/projected/986c3cfd-00c0-4c5f-a3af-ef42bb380140-kube-api-access-qtn99\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.239573 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vmkwm" event={"ID":"986c3cfd-00c0-4c5f-a3af-ef42bb380140","Type":"ContainerDied","Data":"841a20b7c5a423f1f6ce2baa8b54da8b3caff167a48c142e143e37a1664a974a"} Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.239614 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="841a20b7c5a423f1f6ce2baa8b54da8b3caff167a48c142e143e37a1664a974a" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.239685 4902 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.328019 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vmkwm"] Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.334342 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vmkwm"] Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.414420 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-clvkp"] Jan 21 16:03:59 crc kubenswrapper[4902]: E0121 16:03:59.414744 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986c3cfd-00c0-4c5f-a3af-ef42bb380140" containerName="keystone-bootstrap" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.414762 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="986c3cfd-00c0-4c5f-a3af-ef42bb380140" containerName="keystone-bootstrap" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.414974 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="986c3cfd-00c0-4c5f-a3af-ef42bb380140" containerName="keystone-bootstrap" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.415665 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.425403 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.425708 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.425609 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.425873 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-92pp7" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.429411 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.432910 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-clvkp"] Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.541290 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-combined-ca-bundle\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.541401 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-credential-keys\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.541444 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd8wh\" (UniqueName: \"kubernetes.io/projected/663c22ab-26c3-4d29-8965-255dc095eef2-kube-api-access-dd8wh\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" 
Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.541667 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-fernet-keys\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.541786 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-scripts\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.541935 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-config-data\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.643867 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-fernet-keys\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.643994 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-scripts\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.644110 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-config-data\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.644156 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-combined-ca-bundle\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.644184 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-credential-keys\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.644231 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd8wh\" (UniqueName: \"kubernetes.io/projected/663c22ab-26c3-4d29-8965-255dc095eef2-kube-api-access-dd8wh\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.649753 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-credential-keys\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.650200 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-fernet-keys\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.651918 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-config-data\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.653034 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-combined-ca-bundle\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.653552 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-scripts\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.664658 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd8wh\" (UniqueName: \"kubernetes.io/projected/663c22ab-26c3-4d29-8965-255dc095eef2-kube-api-access-dd8wh\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.736697 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:04:00 crc kubenswrapper[4902]: I0121 16:04:00.032978 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-clvkp"] Jan 21 16:04:00 crc kubenswrapper[4902]: I0121 16:04:00.270224 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-clvkp" event={"ID":"663c22ab-26c3-4d29-8965-255dc095eef2","Type":"ContainerStarted","Data":"8e7b81ffed093606aaee9fbef35f94103abd1548cced4aa289004fb371568398"} Jan 21 16:04:00 crc kubenswrapper[4902]: I0121 16:04:00.270631 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-clvkp" event={"ID":"663c22ab-26c3-4d29-8965-255dc095eef2","Type":"ContainerStarted","Data":"d409881154f9d8385023276daaa4e4cc4b728edd944f8b0811375cdf56503acc"} Jan 21 16:04:00 crc kubenswrapper[4902]: I0121 16:04:00.304693 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="986c3cfd-00c0-4c5f-a3af-ef42bb380140" path="/var/lib/kubelet/pods/986c3cfd-00c0-4c5f-a3af-ef42bb380140/volumes" Jan 21 16:04:01 crc kubenswrapper[4902]: I0121 16:04:01.746200 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:04:01 crc kubenswrapper[4902]: I0121 16:04:01.761757 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-clvkp" podStartSLOduration=2.761736286 podStartE2EDuration="2.761736286s" podCreationTimestamp="2026-01-21 16:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:04:00.289996253 +0000 UTC m=+5402.366829282" watchObservedRunningTime="2026-01-21 16:04:01.761736286 +0000 UTC m=+5403.838569315" Jan 21 16:04:01 crc kubenswrapper[4902]: I0121 16:04:01.807994 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57f688859c-fb82z"] Jan 21 16:04:01 crc kubenswrapper[4902]: I0121 16:04:01.808573 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57f688859c-fb82z" podUID="fdfebc8b-bc5c-4214-acee-021a404994bf" containerName="dnsmasq-dns" containerID="cri-o://401f56f07810074a750a97f4da0d7c60e93e7a8c193e6d8365b52546dfbecc13" gracePeriod=10 Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.295997 4902 generic.go:334] "Generic (PLEG): container finished" podID="fdfebc8b-bc5c-4214-acee-021a404994bf" containerID="401f56f07810074a750a97f4da0d7c60e93e7a8c193e6d8365b52546dfbecc13" exitCode=0 Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.303541 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f688859c-fb82z" event={"ID":"fdfebc8b-bc5c-4214-acee-021a404994bf","Type":"ContainerDied","Data":"401f56f07810074a750a97f4da0d7c60e93e7a8c193e6d8365b52546dfbecc13"} Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.303589 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f688859c-fb82z" event={"ID":"fdfebc8b-bc5c-4214-acee-021a404994bf","Type":"ContainerDied","Data":"f3c03b700062d45894766c079b31174948f641c82af2f122619825aabb3684d0"} Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.303604 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3c03b700062d45894766c079b31174948f641c82af2f122619825aabb3684d0" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.305606 4902 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.406648 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-sb\") pod \"fdfebc8b-bc5c-4214-acee-021a404994bf\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.406739 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-dns-svc\") pod \"fdfebc8b-bc5c-4214-acee-021a404994bf\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.406810 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-config\") pod \"fdfebc8b-bc5c-4214-acee-021a404994bf\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.406850 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-nb\") pod \"fdfebc8b-bc5c-4214-acee-021a404994bf\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.406884 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b77t\" (UniqueName: \"kubernetes.io/projected/fdfebc8b-bc5c-4214-acee-021a404994bf-kube-api-access-7b77t\") pod \"fdfebc8b-bc5c-4214-acee-021a404994bf\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.412670 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfebc8b-bc5c-4214-acee-021a404994bf-kube-api-access-7b77t" (OuterVolumeSpecName: "kube-api-access-7b77t") pod "fdfebc8b-bc5c-4214-acee-021a404994bf" (UID: "fdfebc8b-bc5c-4214-acee-021a404994bf"). InnerVolumeSpecName "kube-api-access-7b77t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.446847 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fdfebc8b-bc5c-4214-acee-021a404994bf" (UID: "fdfebc8b-bc5c-4214-acee-021a404994bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.449446 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fdfebc8b-bc5c-4214-acee-021a404994bf" (UID: "fdfebc8b-bc5c-4214-acee-021a404994bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.456959 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-config" (OuterVolumeSpecName: "config") pod "fdfebc8b-bc5c-4214-acee-021a404994bf" (UID: "fdfebc8b-bc5c-4214-acee-021a404994bf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.460223 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fdfebc8b-bc5c-4214-acee-021a404994bf" (UID: "fdfebc8b-bc5c-4214-acee-021a404994bf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.508962 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.509000 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.509010 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.509021 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.509032 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b77t\" (UniqueName: \"kubernetes.io/projected/fdfebc8b-bc5c-4214-acee-021a404994bf-kube-api-access-7b77t\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:03 crc kubenswrapper[4902]: I0121 16:04:03.309423 4902 generic.go:334] "Generic (PLEG): container finished" podID="663c22ab-26c3-4d29-8965-255dc095eef2" containerID="8e7b81ffed093606aaee9fbef35f94103abd1548cced4aa289004fb371568398" exitCode=0 Jan 21 16:04:03 crc kubenswrapper[4902]: I0121 16:04:03.309476 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-clvkp" event={"ID":"663c22ab-26c3-4d29-8965-255dc095eef2","Type":"ContainerDied","Data":"8e7b81ffed093606aaee9fbef35f94103abd1548cced4aa289004fb371568398"} Jan 21 16:04:03 crc kubenswrapper[4902]: I0121 16:04:03.309795 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:04:03 crc kubenswrapper[4902]: I0121 16:04:03.360882 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57f688859c-fb82z"] Jan 21 16:04:03 crc kubenswrapper[4902]: I0121 16:04:03.368473 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57f688859c-fb82z"] Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.302774 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdfebc8b-bc5c-4214-acee-021a404994bf" path="/var/lib/kubelet/pods/fdfebc8b-bc5c-4214-acee-021a404994bf/volumes" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.629520 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.748708 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-credential-keys\") pod \"663c22ab-26c3-4d29-8965-255dc095eef2\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.748827 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-scripts\") pod \"663c22ab-26c3-4d29-8965-255dc095eef2\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.748860 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-config-data\") pod \"663c22ab-26c3-4d29-8965-255dc095eef2\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.748906 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-combined-ca-bundle\") pod \"663c22ab-26c3-4d29-8965-255dc095eef2\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.748939 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd8wh\" (UniqueName: \"kubernetes.io/projected/663c22ab-26c3-4d29-8965-255dc095eef2-kube-api-access-dd8wh\") pod \"663c22ab-26c3-4d29-8965-255dc095eef2\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.749066 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-fernet-keys\") pod \"663c22ab-26c3-4d29-8965-255dc095eef2\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.758861 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/663c22ab-26c3-4d29-8965-255dc095eef2-kube-api-access-dd8wh" (OuterVolumeSpecName: "kube-api-access-dd8wh") pod "663c22ab-26c3-4d29-8965-255dc095eef2" (UID: "663c22ab-26c3-4d29-8965-255dc095eef2"). InnerVolumeSpecName "kube-api-access-dd8wh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.760301 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "663c22ab-26c3-4d29-8965-255dc095eef2" (UID: "663c22ab-26c3-4d29-8965-255dc095eef2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.760333 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-scripts" (OuterVolumeSpecName: "scripts") pod "663c22ab-26c3-4d29-8965-255dc095eef2" (UID: "663c22ab-26c3-4d29-8965-255dc095eef2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.760794 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "663c22ab-26c3-4d29-8965-255dc095eef2" (UID: "663c22ab-26c3-4d29-8965-255dc095eef2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.782862 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "663c22ab-26c3-4d29-8965-255dc095eef2" (UID: "663c22ab-26c3-4d29-8965-255dc095eef2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.786617 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-config-data" (OuterVolumeSpecName: "config-data") pod "663c22ab-26c3-4d29-8965-255dc095eef2" (UID: "663c22ab-26c3-4d29-8965-255dc095eef2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.851524 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.851566 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.851580 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.851593 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd8wh\" (UniqueName: \"kubernetes.io/projected/663c22ab-26c3-4d29-8965-255dc095eef2-kube-api-access-dd8wh\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.851604 4902 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.851613 4902 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.323399 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-clvkp" event={"ID":"663c22ab-26c3-4d29-8965-255dc095eef2","Type":"ContainerDied","Data":"d409881154f9d8385023276daaa4e4cc4b728edd944f8b0811375cdf56503acc"} Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.323440 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d409881154f9d8385023276daaa4e4cc4b728edd944f8b0811375cdf56503acc" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.323461 4902 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.408765 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-67bfc4c47-flndt"] Jan 21 16:04:05 crc kubenswrapper[4902]: E0121 16:04:05.409166 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="663c22ab-26c3-4d29-8965-255dc095eef2" containerName="keystone-bootstrap" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.409187 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="663c22ab-26c3-4d29-8965-255dc095eef2" containerName="keystone-bootstrap" Jan 21 16:04:05 crc kubenswrapper[4902]: E0121 16:04:05.409201 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfebc8b-bc5c-4214-acee-021a404994bf" containerName="init" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.409209 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfebc8b-bc5c-4214-acee-021a404994bf" containerName="init" Jan 21 16:04:05 crc kubenswrapper[4902]: E0121 16:04:05.409236 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfebc8b-bc5c-4214-acee-021a404994bf" containerName="dnsmasq-dns" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.409244 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfebc8b-bc5c-4214-acee-021a404994bf" containerName="dnsmasq-dns" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.409420 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdfebc8b-bc5c-4214-acee-021a404994bf" containerName="dnsmasq-dns" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.409442 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="663c22ab-26c3-4d29-8965-255dc095eef2" containerName="keystone-bootstrap" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.410139 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.412836 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.412843 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.412958 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.413496 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.413498 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.413755 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-92pp7" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.427197 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-67bfc4c47-flndt"] Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.565337 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-config-data\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.565562 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-scripts\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.565668 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqv86\" (UniqueName: \"kubernetes.io/projected/1bc7e490-49b1-4eef-ab29-4453235cf752-kube-api-access-kqv86\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.565752 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-combined-ca-bundle\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.565841 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-internal-tls-certs\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.565923 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-public-tls-certs\") pod \"keystone-67bfc4c47-flndt\" (UID: 
\"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.566147 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-credential-keys\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.566288 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-fernet-keys\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.667403 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-fernet-keys\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.667478 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-config-data\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.667514 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-scripts\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.667538 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqv86\" (UniqueName: \"kubernetes.io/projected/1bc7e490-49b1-4eef-ab29-4453235cf752-kube-api-access-kqv86\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.667557 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-combined-ca-bundle\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.667591 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-internal-tls-certs\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.667620 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-public-tls-certs\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc 
kubenswrapper[4902]: I0121 16:04:05.667643 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-credential-keys\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.671362 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-credential-keys\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.671475 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-public-tls-certs\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.672449 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-scripts\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.673055 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-config-data\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.673829 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-fernet-keys\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.675346 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-combined-ca-bundle\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.676473 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-internal-tls-certs\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.684775 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqv86\" (UniqueName: \"kubernetes.io/projected/1bc7e490-49b1-4eef-ab29-4453235cf752-kube-api-access-kqv86\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.764475 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:06 crc kubenswrapper[4902]: I0121 16:04:06.193741 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-67bfc4c47-flndt"] Jan 21 16:04:06 crc kubenswrapper[4902]: W0121 16:04:06.197478 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bc7e490_49b1_4eef_ab29_4453235cf752.slice/crio-2b5a54b234682175cc1ef1f64c55ed18ff50fdcf429befb210b8ea2f3117936e WatchSource:0}: Error finding container 2b5a54b234682175cc1ef1f64c55ed18ff50fdcf429befb210b8ea2f3117936e: Status 404 returned error can't find the container with id 2b5a54b234682175cc1ef1f64c55ed18ff50fdcf429befb210b8ea2f3117936e Jan 21 16:04:06 crc kubenswrapper[4902]: I0121 16:04:06.333478 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67bfc4c47-flndt" event={"ID":"1bc7e490-49b1-4eef-ab29-4453235cf752","Type":"ContainerStarted","Data":"2b5a54b234682175cc1ef1f64c55ed18ff50fdcf429befb210b8ea2f3117936e"} Jan 21 16:04:07 crc kubenswrapper[4902]: I0121 16:04:07.294991 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:04:07 crc kubenswrapper[4902]: E0121 16:04:07.295625 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:04:07 crc kubenswrapper[4902]: I0121 16:04:07.342666 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67bfc4c47-flndt" event={"ID":"1bc7e490-49b1-4eef-ab29-4453235cf752","Type":"ContainerStarted","Data":"f1346ef846aac7fea42a88a6bdd4bb7ec6ffb6acdf430d21727340e0fbaa8000"} Jan 21 16:04:07 crc kubenswrapper[4902]: I0121 16:04:07.342794 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:07 crc kubenswrapper[4902]: I0121 16:04:07.366692 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-67bfc4c47-flndt" podStartSLOduration=2.366657859 podStartE2EDuration="2.366657859s" podCreationTimestamp="2026-01-21 16:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:04:07.363160171 +0000 UTC m=+5409.439993210" watchObservedRunningTime="2026-01-21 16:04:07.366657859 +0000 UTC m=+5409.443490888" Jan 21 16:04:19 crc kubenswrapper[4902]: I0121 16:04:19.295887 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:04:19 crc kubenswrapper[4902]: E0121 16:04:19.296945 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:04:32 crc kubenswrapper[4902]: I0121 16:04:32.295524 4902 scope.go:117] 
"RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:04:32 crc kubenswrapper[4902]: E0121 16:04:32.296531 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:04:37 crc kubenswrapper[4902]: I0121 16:04:37.346285 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.169430 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.170785 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.172763 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.173442 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-p6shw" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.173998 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.180279 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.260438 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.260803 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config-secret\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.260851 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.260902 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89zjk\" (UniqueName: \"kubernetes.io/projected/f901a0e2-6941-4d4e-a90a-2905acf87521-kube-api-access-89zjk\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.363076 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.363162 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89zjk\" (UniqueName: \"kubernetes.io/projected/f901a0e2-6941-4d4e-a90a-2905acf87521-kube-api-access-89zjk\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.363298 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.363317 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config-secret\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.364836 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.386080 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config-secret\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.409097 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.409638 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89zjk\" (UniqueName: \"kubernetes.io/projected/f901a0e2-6941-4d4e-a90a-2905acf87521-kube-api-access-89zjk\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.491683 4902 util.go:30] "No sandbox for pod can be found. 
Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.953728 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 21 16:04:41 crc kubenswrapper[4902]: I0121 16:04:41.633436 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f901a0e2-6941-4d4e-a90a-2905acf87521","Type":"ContainerStarted","Data":"0820f291e3e79ca9f589a0a9fd094ceca1ca151624389e86bb426b3920d38db1"}
Jan 21 16:04:41 crc kubenswrapper[4902]: I0121 16:04:41.633740 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f901a0e2-6941-4d4e-a90a-2905acf87521","Type":"ContainerStarted","Data":"1e1d0a1b83d0024d201e7ee3eaa5897f636699fac45162048a3d139fcb0fa621"}
Jan 21 16:04:41 crc kubenswrapper[4902]: I0121 16:04:41.664309 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.66427788 podStartE2EDuration="1.66427788s" podCreationTimestamp="2026-01-21 16:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:04:41.652783608 +0000 UTC m=+5443.729616697" watchObservedRunningTime="2026-01-21 16:04:41.66427788 +0000 UTC m=+5443.741110949"
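
The pod_startup_latency_tracker entry reports podStartSLOduration=1.66427788, which is simply watchObservedRunningTime minus podCreationTimestamp; the m=+5443.7296... suffix is Go's monotonic clock reading, appended when a time.Time still carries one. A quick check of that arithmetic in Go, parsing the timestamps exactly as they appear in the entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching timestamps like "2026-01-21 16:04:41.66427788 +0000 UTC"
	// (the default Go time.Time string form; fractional seconds are optional).
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, _ := time.Parse(layout, "2026-01-21 16:04:40 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-01-21 16:04:41.66427788 +0000 UTC")

	fmt.Println(observed.Sub(created).Seconds()) // 1.66427788
}
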
containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:05:26 crc kubenswrapper[4902]: E0121 16:05:26.295966 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:05:39 crc kubenswrapper[4902]: I0121 16:05:39.294846 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:05:39 crc kubenswrapper[4902]: E0121 16:05:39.295925 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:05:51 crc kubenswrapper[4902]: I0121 16:05:51.294773 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:05:51 crc kubenswrapper[4902]: E0121 16:05:51.297601 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:06:02 crc kubenswrapper[4902]: I0121 16:06:02.297664 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:06:02 crc kubenswrapper[4902]: E0121 16:06:02.299314 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.294768 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:06:16 crc kubenswrapper[4902]: E0121 16:06:16.295624 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.726306 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-85k9w"] Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.727525 4902 util.go:30] "No sandbox for pod can be found. 
Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.726306 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-85k9w"]
Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.727525 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-85k9w"
Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.740300 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-85k9w"]
Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.748903 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0136-account-create-update-k4cmq"]
Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.749946 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0136-account-create-update-k4cmq"
Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.752098 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.757438 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0136-account-create-update-k4cmq"]
Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.865814 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm9sk\" (UniqueName: \"kubernetes.io/projected/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-kube-api-access-dm9sk\") pod \"barbican-db-create-85k9w\" (UID: \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\") " pod="openstack/barbican-db-create-85k9w"
Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.865921 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76jlc\" (UniqueName: \"kubernetes.io/projected/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-kube-api-access-76jlc\") pod \"barbican-0136-account-create-update-k4cmq\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") " pod="openstack/barbican-0136-account-create-update-k4cmq"
Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.866103 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-operator-scripts\") pod \"barbican-db-create-85k9w\" (UID: \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\") " pod="openstack/barbican-db-create-85k9w"
Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.866212 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-operator-scripts\") pod \"barbican-0136-account-create-update-k4cmq\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") " pod="openstack/barbican-0136-account-create-update-k4cmq"
Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.967344 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-operator-scripts\") pod \"barbican-0136-account-create-update-k4cmq\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") " pod="openstack/barbican-0136-account-create-update-k4cmq"
Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.967433 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm9sk\" (UniqueName: \"kubernetes.io/projected/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-kube-api-access-dm9sk\") pod \"barbican-db-create-85k9w\" (UID: \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\") " pod="openstack/barbican-db-create-85k9w"
Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.967459 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76jlc\" (UniqueName: \"kubernetes.io/projected/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-kube-api-access-76jlc\") pod \"barbican-0136-account-create-update-k4cmq\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") " pod="openstack/barbican-0136-account-create-update-k4cmq"
"operationExecutor.MountVolume started for volume \"kube-api-access-76jlc\" (UniqueName: \"kubernetes.io/projected/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-kube-api-access-76jlc\") pod \"barbican-0136-account-create-update-k4cmq\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") " pod="openstack/barbican-0136-account-create-update-k4cmq" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.967513 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-operator-scripts\") pod \"barbican-db-create-85k9w\" (UID: \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\") " pod="openstack/barbican-db-create-85k9w" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.968307 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-operator-scripts\") pod \"barbican-db-create-85k9w\" (UID: \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\") " pod="openstack/barbican-db-create-85k9w" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.968805 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-operator-scripts\") pod \"barbican-0136-account-create-update-k4cmq\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") " pod="openstack/barbican-0136-account-create-update-k4cmq" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.987641 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76jlc\" (UniqueName: \"kubernetes.io/projected/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-kube-api-access-76jlc\") pod \"barbican-0136-account-create-update-k4cmq\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") " pod="openstack/barbican-0136-account-create-update-k4cmq" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.991090 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm9sk\" (UniqueName: \"kubernetes.io/projected/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-kube-api-access-dm9sk\") pod \"barbican-db-create-85k9w\" (UID: \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\") " pod="openstack/barbican-db-create-85k9w" Jan 21 16:06:17 crc kubenswrapper[4902]: I0121 16:06:17.047657 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-85k9w" Jan 21 16:06:17 crc kubenswrapper[4902]: I0121 16:06:17.065525 4902 util.go:30] "No sandbox for pod can be found. 
Jan 21 16:06:17 crc kubenswrapper[4902]: I0121 16:06:17.491683 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-85k9w"]
Jan 21 16:06:17 crc kubenswrapper[4902]: I0121 16:06:17.570573 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0136-account-create-update-k4cmq"]
Jan 21 16:06:17 crc kubenswrapper[4902]: W0121 16:06:17.580606 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedc6e2c0_6737_49e0_b5d8_f77a5de0a7f8.slice/crio-b6c7a3a5979b7a774d7759ed19e39816ae89d88b76761b46d2df20ec67b763ef WatchSource:0}: Error finding container b6c7a3a5979b7a774d7759ed19e39816ae89d88b76761b46d2df20ec67b763ef: Status 404 returned error can't find the container with id b6c7a3a5979b7a774d7759ed19e39816ae89d88b76761b46d2df20ec67b763ef
Jan 21 16:06:18 crc kubenswrapper[4902]: I0121 16:06:18.452903 4902 generic.go:334] "Generic (PLEG): container finished" podID="edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8" containerID="c71cec8eacda47056c7a215f2b04bc9d493e2cbfdf871841495ef07bfb7eb7a5" exitCode=0
Jan 21 16:06:18 crc kubenswrapper[4902]: I0121 16:06:18.452992 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0136-account-create-update-k4cmq" event={"ID":"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8","Type":"ContainerDied","Data":"c71cec8eacda47056c7a215f2b04bc9d493e2cbfdf871841495ef07bfb7eb7a5"}
Jan 21 16:06:18 crc kubenswrapper[4902]: I0121 16:06:18.453277 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0136-account-create-update-k4cmq" event={"ID":"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8","Type":"ContainerStarted","Data":"b6c7a3a5979b7a774d7759ed19e39816ae89d88b76761b46d2df20ec67b763ef"}
Jan 21 16:06:18 crc kubenswrapper[4902]: I0121 16:06:18.455227 4902 generic.go:334] "Generic (PLEG): container finished" podID="e8c6f518-fd8b-4c60-9f36-1eb57bd30b06" containerID="e7ae920f7061533fd1ae5c5eabfd18124e9c27f0aad7594a5b9ba20211753b38" exitCode=0
Jan 21 16:06:18 crc kubenswrapper[4902]: I0121 16:06:18.455259 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-85k9w" event={"ID":"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06","Type":"ContainerDied","Data":"e7ae920f7061533fd1ae5c5eabfd18124e9c27f0aad7594a5b9ba20211753b38"}
Jan 21 16:06:18 crc kubenswrapper[4902]: I0121 16:06:18.455277 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-85k9w" event={"ID":"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06","Type":"ContainerStarted","Data":"134c81f0abf8e37719a1daae08481f6ead7f458acd92a02ec1a2553905e643b7"}
Jan 21 16:06:19 crc kubenswrapper[4902]: I0121 16:06:19.883711 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-85k9w"
Jan 21 16:06:19 crc kubenswrapper[4902]: I0121 16:06:19.890779 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0136-account-create-update-k4cmq"
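
The W "Failed to process watch event ... Status 404" entry above looks like the kubelet's embedded cAdvisor noticing a new crio-<id> cgroup under the pod's slice and asking the runtime about a container that is already gone by the time it asks, which is harmless for short-lived job containers like these. Note how the slice name embeds the pod UID with its dashes replaced by underscores. A Go sketch of that path construction (the helper name and the best-effort QoS assumption are mine; the pattern itself is copied from the log):

package main

import (
	"fmt"
	"strings"
)

// cgroupSlicePath rebuilds the systemd cgroup path seen in the log for a
// best-effort pod: the pod UID's dashes become underscores in the slice name.
func cgroupSlicePath(podUID, containerID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf(
		"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod%s.slice/crio-%s",
		uid, containerID)
}

func main() {
	fmt.Println(cgroupSlicePath(
		"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8",
		"b6c7a3a5979b7a774d7759ed19e39816ae89d88b76761b46d2df20ec67b763ef"))
}
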
Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.023086 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-operator-scripts\") pod \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\" (UID: \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\") "
Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.023134 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-operator-scripts\") pod \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") "
Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.023162 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76jlc\" (UniqueName: \"kubernetes.io/projected/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-kube-api-access-76jlc\") pod \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") "
Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.023298 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm9sk\" (UniqueName: \"kubernetes.io/projected/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-kube-api-access-dm9sk\") pod \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\" (UID: \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\") "
Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.023600 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8c6f518-fd8b-4c60-9f36-1eb57bd30b06" (UID: "e8c6f518-fd8b-4c60-9f36-1eb57bd30b06"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.023833 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8" (UID: "edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.028724 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-kube-api-access-dm9sk" (OuterVolumeSpecName: "kube-api-access-dm9sk") pod "e8c6f518-fd8b-4c60-9f36-1eb57bd30b06" (UID: "e8c6f518-fd8b-4c60-9f36-1eb57bd30b06"). InnerVolumeSpecName "kube-api-access-dm9sk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.029569 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-kube-api-access-76jlc" (OuterVolumeSpecName: "kube-api-access-76jlc") pod "edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8" (UID: "edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8"). InnerVolumeSpecName "kube-api-access-76jlc". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.125263 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm9sk\" (UniqueName: \"kubernetes.io/projected/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-kube-api-access-dm9sk\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.125312 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.125321 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.125342 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76jlc\" (UniqueName: \"kubernetes.io/projected/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-kube-api-access-76jlc\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.490263 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0136-account-create-update-k4cmq" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.490255 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0136-account-create-update-k4cmq" event={"ID":"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8","Type":"ContainerDied","Data":"b6c7a3a5979b7a774d7759ed19e39816ae89d88b76761b46d2df20ec67b763ef"} Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.491293 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6c7a3a5979b7a774d7759ed19e39816ae89d88b76761b46d2df20ec67b763ef" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.493513 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-85k9w" event={"ID":"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06","Type":"ContainerDied","Data":"134c81f0abf8e37719a1daae08481f6ead7f458acd92a02ec1a2553905e643b7"} Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.493539 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="134c81f0abf8e37719a1daae08481f6ead7f458acd92a02ec1a2553905e643b7" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.493601 4902 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.029091 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-7k4p6"]
Jan 21 16:06:22 crc kubenswrapper[4902]: E0121 16:06:22.029504 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c6f518-fd8b-4c60-9f36-1eb57bd30b06" containerName="mariadb-database-create"
Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.029524 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c6f518-fd8b-4c60-9f36-1eb57bd30b06" containerName="mariadb-database-create"
Jan 21 16:06:22 crc kubenswrapper[4902]: E0121 16:06:22.029567 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8" containerName="mariadb-account-create-update"
Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.029575 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8" containerName="mariadb-account-create-update"
Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.029753 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8" containerName="mariadb-account-create-update"
Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.029773 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c6f518-fd8b-4c60-9f36-1eb57bd30b06" containerName="mariadb-database-create"
Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.030441 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7k4p6"
Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.034685 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-b64cz"
Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.034886 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.063448 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7k4p6"]
Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.169163 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-db-sync-config-data\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6"
Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.169226 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-combined-ca-bundle\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6"
Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.169283 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7nnc\" (UniqueName: \"kubernetes.io/projected/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-kube-api-access-d7nnc\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6"
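
The cpu_manager/memory_manager "RemoveStaleState" entries above fire while admitting the new db-sync pod: resource-manager state keyed by the UIDs of the just-completed db-create and account-create pods is purged so their now-dead CPU/memory assignments do not leak into accounting for new workloads. A sketch of that idea in Go (all types and names here are illustrative, not kubelet's):

package main

import "fmt"

// key identifies a per-container resource assignment.
type key struct{ podUID, container string }

// removeStaleState drops assignments whose pod UID is no longer active,
// mirroring the intent of the RemoveStaleState log entries above.
func removeStaleState(assignments map[key][]int, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %s/%s\n", k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	a := map[key][]int{
		{"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06", "mariadb-database-create"}: {0, 1},
	}
	removeStaleState(a, map[string]bool{"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2": true})
	fmt.Println(len(a)) // 0
}
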
(UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-combined-ca-bundle\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.270881 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7nnc\" (UniqueName: \"kubernetes.io/projected/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-kube-api-access-d7nnc\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.271005 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-db-sync-config-data\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.275645 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-db-sync-config-data\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.276840 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-combined-ca-bundle\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.291604 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7nnc\" (UniqueName: \"kubernetes.io/projected/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-kube-api-access-d7nnc\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.375693 4902 util.go:30] "No sandbox for pod can be found. 
Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.619089 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7k4p6"]
Jan 21 16:06:22 crc kubenswrapper[4902]: W0121 16:06:22.633770 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58a9fed4_e340_4ac7_a3a6_750ce7aa3ad2.slice/crio-497d95fbf98203bf6a8c356b922e547bcb6fb481a1e1214efc82b5a776c64feb WatchSource:0}: Error finding container 497d95fbf98203bf6a8c356b922e547bcb6fb481a1e1214efc82b5a776c64feb: Status 404 returned error can't find the container with id 497d95fbf98203bf6a8c356b922e547bcb6fb481a1e1214efc82b5a776c64feb
Jan 21 16:06:23 crc kubenswrapper[4902]: I0121 16:06:23.516058 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7k4p6" event={"ID":"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2","Type":"ContainerStarted","Data":"65fe44f3b0e17d56dbcc24184af4bec7f8662c78351c1314a8a65ecfa5dbb257"}
Jan 21 16:06:23 crc kubenswrapper[4902]: I0121 16:06:23.516364 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7k4p6" event={"ID":"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2","Type":"ContainerStarted","Data":"497d95fbf98203bf6a8c356b922e547bcb6fb481a1e1214efc82b5a776c64feb"}
Jan 21 16:06:23 crc kubenswrapper[4902]: I0121 16:06:23.536118 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-7k4p6" podStartSLOduration=2.53610071 podStartE2EDuration="2.53610071s" podCreationTimestamp="2026-01-21 16:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:23.529092333 +0000 UTC m=+5545.605925372" watchObservedRunningTime="2026-01-21 16:06:23.53610071 +0000 UTC m=+5545.612933739"
Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.528518 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lqndd"]
Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.530309 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lqndd"
Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.541192 4902 generic.go:334] "Generic (PLEG): container finished" podID="58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2" containerID="65fe44f3b0e17d56dbcc24184af4bec7f8662c78351c1314a8a65ecfa5dbb257" exitCode=0
Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.541234 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7k4p6" event={"ID":"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2","Type":"ContainerDied","Data":"65fe44f3b0e17d56dbcc24184af4bec7f8662c78351c1314a8a65ecfa5dbb257"}
Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.547404 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lqndd"]
Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.625933 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqg6q\" (UniqueName: \"kubernetes.io/projected/ddd543be-03fc-4a61-bb0b-55a066361a5f-kube-api-access-gqg6q\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd"
Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.626013 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-catalog-content\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd"
Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.626099 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-utilities\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd"
Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.727917 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-catalog-content\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd"
Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.728029 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-utilities\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd"
Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.728117 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqg6q\" (UniqueName: \"kubernetes.io/projected/ddd543be-03fc-4a61-bb0b-55a066361a5f-kube-api-access-gqg6q\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd"
Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.728955 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-catalog-content\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd"
pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.729241 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-utilities\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.756916 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqg6q\" (UniqueName: \"kubernetes.io/projected/ddd543be-03fc-4a61-bb0b-55a066361a5f-kube-api-access-gqg6q\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.855495 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:26 crc kubenswrapper[4902]: I0121 16:06:26.789342 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lqndd"] Jan 21 16:06:26 crc kubenswrapper[4902]: W0121 16:06:26.807229 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddd543be_03fc_4a61_bb0b_55a066361a5f.slice/crio-ff9be32e5e25b980d1ae27fea132e376859f3819b5b41cfaa777f65b307f07ce WatchSource:0}: Error finding container ff9be32e5e25b980d1ae27fea132e376859f3819b5b41cfaa777f65b307f07ce: Status 404 returned error can't find the container with id ff9be32e5e25b980d1ae27fea132e376859f3819b5b41cfaa777f65b307f07ce Jan 21 16:06:26 crc kubenswrapper[4902]: I0121 16:06:26.998741 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.150214 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-db-sync-config-data\") pod \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.150292 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7nnc\" (UniqueName: \"kubernetes.io/projected/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-kube-api-access-d7nnc\") pod \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.150427 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-combined-ca-bundle\") pod \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.157861 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2" (UID: "58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.158338 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-kube-api-access-d7nnc" (OuterVolumeSpecName: "kube-api-access-d7nnc") pod "58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2" (UID: "58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2"). InnerVolumeSpecName "kube-api-access-d7nnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.195226 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2" (UID: "58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.252219 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.252263 4902 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.252276 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7nnc\" (UniqueName: \"kubernetes.io/projected/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-kube-api-access-d7nnc\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.558907 4902 generic.go:334] "Generic (PLEG): container finished" podID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerID="c1974a18bb600f84ad592fcb1ec0fc601ac073e08d0d02562db2c3da418aff99" exitCode=0 Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.558971 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqndd" event={"ID":"ddd543be-03fc-4a61-bb0b-55a066361a5f","Type":"ContainerDied","Data":"c1974a18bb600f84ad592fcb1ec0fc601ac073e08d0d02562db2c3da418aff99"} Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.559034 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqndd" event={"ID":"ddd543be-03fc-4a61-bb0b-55a066361a5f","Type":"ContainerStarted","Data":"ff9be32e5e25b980d1ae27fea132e376859f3819b5b41cfaa777f65b307f07ce"} Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.561182 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7k4p6" event={"ID":"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2","Type":"ContainerDied","Data":"497d95fbf98203bf6a8c356b922e547bcb6fb481a1e1214efc82b5a776c64feb"} Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.561212 4902 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.561220 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="497d95fbf98203bf6a8c356b922e547bcb6fb481a1e1214efc82b5a776c64feb"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.728431 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-c94b5b747-nxfg6"]
Jan 21 16:06:27 crc kubenswrapper[4902]: E0121 16:06:27.728864 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2" containerName="barbican-db-sync"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.728882 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2" containerName="barbican-db-sync"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.729112 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2" containerName="barbican-db-sync"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.731944 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-c94b5b747-nxfg6"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.734419 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-b64cz"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.734598 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.738223 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.753254 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c94b5b747-nxfg6"]
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.775248 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-8458cc5fd6-z5j6z"]
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.777764 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.786289 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.807840 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8458cc5fd6-z5j6z"]
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.824310 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85d446946c-gb4r2"]
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.830262 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85d446946c-gb4r2"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.862764 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-config-data-custom\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.862833 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-config-data-custom\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.862855 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-combined-ca-bundle\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.862874 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-combined-ca-bundle\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.862909 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-logs\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.862943 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dzcd\" (UniqueName: \"kubernetes.io/projected/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-kube-api-access-5dzcd\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.862971 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-config-data\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.862999 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-config-data\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z"
Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.863023 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cef3f6-598c-483e-b2b6-bb3d2942f18e-logs\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cef3f6-598c-483e-b2b6-bb3d2942f18e-logs\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.863054 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vddm\" (UniqueName: \"kubernetes.io/projected/95cef3f6-598c-483e-b2b6-bb3d2942f18e-kube-api-access-9vddm\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.867854 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85d446946c-gb4r2"] Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.926178 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-659b467f5b-b29gg"] Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.927585 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.930120 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.933941 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-659b467f5b-b29gg"] Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.965924 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-sb\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966546 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-config-data-custom\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966589 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-combined-ca-bundle\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966618 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-combined-ca-bundle\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966649 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-config\") pod \"dnsmasq-dns-85d446946c-gb4r2\" 
(UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966675 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww75l\" (UniqueName: \"kubernetes.io/projected/ff4fadc7-2c31-451f-9455-5112a195b36e-kube-api-access-ww75l\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966703 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-logs\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966741 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-nb\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966769 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dzcd\" (UniqueName: \"kubernetes.io/projected/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-kube-api-access-5dzcd\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966823 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-config-data\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966861 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-config-data\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966895 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cef3f6-598c-483e-b2b6-bb3d2942f18e-logs\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966915 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vddm\" (UniqueName: \"kubernetes.io/projected/95cef3f6-598c-483e-b2b6-bb3d2942f18e-kube-api-access-9vddm\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966951 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-dns-svc\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966981 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-config-data-custom\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.968270 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cef3f6-598c-483e-b2b6-bb3d2942f18e-logs\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.969650 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-logs\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.972431 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-combined-ca-bundle\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.973661 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-combined-ca-bundle\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.974161 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-config-data\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.975806 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-config-data\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.978161 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-config-data-custom\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.979691 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-config-data-custom\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.984099 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vddm\" (UniqueName: \"kubernetes.io/projected/95cef3f6-598c-483e-b2b6-bb3d2942f18e-kube-api-access-9vddm\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.986551 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dzcd\" (UniqueName: \"kubernetes.io/projected/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-kube-api-access-5dzcd\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.069207 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79ad32fd-7d7a-4779-87c5-093c16782962-logs\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.069885 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-combined-ca-bundle\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.069921 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-sb\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.070065 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-config\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.070154 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww75l\" (UniqueName: \"kubernetes.io/projected/ff4fadc7-2c31-451f-9455-5112a195b36e-kube-api-access-ww75l\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.070234 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-nb\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.070274 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.070399 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6gdv\" (UniqueName: \"kubernetes.io/projected/79ad32fd-7d7a-4779-87c5-093c16782962-kube-api-access-n6gdv\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.070456 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data-custom\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.070613 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-dns-svc\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.071001 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-config\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.071193 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-nb\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.071385 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-dns-svc\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.071398 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-sb\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.075689 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.099324 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww75l\" (UniqueName: \"kubernetes.io/projected/ff4fadc7-2c31-451f-9455-5112a195b36e-kube-api-access-ww75l\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.132291 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.165371 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.172477 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.172546 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6gdv\" (UniqueName: \"kubernetes.io/projected/79ad32fd-7d7a-4779-87c5-093c16782962-kube-api-access-n6gdv\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.172576 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data-custom\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.172637 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79ad32fd-7d7a-4779-87c5-093c16782962-logs\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.172656 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-combined-ca-bundle\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.177152 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79ad32fd-7d7a-4779-87c5-093c16782962-logs\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.181660 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data-custom\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" 
Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.182087 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.185663 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-combined-ca-bundle\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.205121 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6gdv\" (UniqueName: \"kubernetes.io/projected/79ad32fd-7d7a-4779-87c5-093c16782962-kube-api-access-n6gdv\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.251851 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.532171 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c94b5b747-nxfg6"] Jan 21 16:06:28 crc kubenswrapper[4902]: W0121 16:06:28.545669 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9162d3ad_8f1a_4998_9f4d_a1869af6a23f.slice/crio-13073fb5597aabcc0be45afe7891278491e8fe551edd5d8f36d37628cbf3ad78 WatchSource:0}: Error finding container 13073fb5597aabcc0be45afe7891278491e8fe551edd5d8f36d37628cbf3ad78: Status 404 returned error can't find the container with id 13073fb5597aabcc0be45afe7891278491e8fe551edd5d8f36d37628cbf3ad78 Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.589878 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c94b5b747-nxfg6" event={"ID":"9162d3ad-8f1a-4998-9f4d-a1869af6a23f","Type":"ContainerStarted","Data":"13073fb5597aabcc0be45afe7891278491e8fe551edd5d8f36d37628cbf3ad78"} Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.672925 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8458cc5fd6-z5j6z"] Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.740357 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85d446946c-gb4r2"] Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.819826 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-659b467f5b-b29gg"] Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.296569 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:06:29 crc kubenswrapper[4902]: E0121 16:06:29.297071 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 
21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.602687 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff4fadc7-2c31-451f-9455-5112a195b36e" containerID="78e0f5562314520f841b7ea0877c38f4f434c16d4495c8580ea2c10f6698660a" exitCode=0 Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.603485 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" event={"ID":"ff4fadc7-2c31-451f-9455-5112a195b36e","Type":"ContainerDied","Data":"78e0f5562314520f841b7ea0877c38f4f434c16d4495c8580ea2c10f6698660a"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.605259 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" event={"ID":"ff4fadc7-2c31-451f-9455-5112a195b36e","Type":"ContainerStarted","Data":"85869cd817e0c1d272d916eb419b76f6201c88f1c3e64ad5a43adcae83c81773"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.630656 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c94b5b747-nxfg6" event={"ID":"9162d3ad-8f1a-4998-9f4d-a1869af6a23f","Type":"ContainerStarted","Data":"ea38d7edf84c05ec880f1c91f064fad70b15df79455e9e00e8194282fabb1f64"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.630706 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c94b5b747-nxfg6" event={"ID":"9162d3ad-8f1a-4998-9f4d-a1869af6a23f","Type":"ContainerStarted","Data":"34e5ff8f6124c4d89f1e5fd95c1c06e57d5eed2cbe37d03c28fb671f72ccaa1a"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.634353 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659b467f5b-b29gg" event={"ID":"79ad32fd-7d7a-4779-87c5-093c16782962","Type":"ContainerStarted","Data":"fd2638be10a4932da8d6b26c06b5ad301fa3bce23378df3af60cb4cb40724ee4"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.634401 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659b467f5b-b29gg" event={"ID":"79ad32fd-7d7a-4779-87c5-093c16782962","Type":"ContainerStarted","Data":"3aa4ff5500e0f1a699c62a0b183168e55443f1a98525a6c911857d407fabc6d6"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.634418 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659b467f5b-b29gg" event={"ID":"79ad32fd-7d7a-4779-87c5-093c16782962","Type":"ContainerStarted","Data":"ab7459deb556f801c8bbce99eae2c2e300be05d5f0dd8719e129ec50c380cba7"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.634936 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.635073 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.646799 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-c94b5b747-nxfg6" podStartSLOduration=2.646781995 podStartE2EDuration="2.646781995s" podCreationTimestamp="2026-01-21 16:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:29.644918542 +0000 UTC m=+5551.721751571" watchObservedRunningTime="2026-01-21 16:06:29.646781995 +0000 UTC m=+5551.723615024" Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.649277 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" event={"ID":"95cef3f6-598c-483e-b2b6-bb3d2942f18e","Type":"ContainerStarted","Data":"b49919a24663d6af6440fdfa1aede199fcb6440512ce811d7c9f7bfa99213e0d"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.649327 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" event={"ID":"95cef3f6-598c-483e-b2b6-bb3d2942f18e","Type":"ContainerStarted","Data":"fb2b0fdd1c51ff429d9fa5f4cf442c6b1046ac107d956a8562e2a03d47b8bf76"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.649345 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" event={"ID":"95cef3f6-598c-483e-b2b6-bb3d2942f18e","Type":"ContainerStarted","Data":"4af23d677cf7b1dfe1cac1e02051031118fab8aa51d79a4cebb67955df545551"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.668145 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-659b467f5b-b29gg" podStartSLOduration=2.668126465 podStartE2EDuration="2.668126465s" podCreationTimestamp="2026-01-21 16:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:29.667112966 +0000 UTC m=+5551.743945995" watchObservedRunningTime="2026-01-21 16:06:29.668126465 +0000 UTC m=+5551.744959504" Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.676547 4902 generic.go:334] "Generic (PLEG): container finished" podID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerID="b573b028c6503d871975bc29f32aef22c05a1b0aa15962dbb2b5064c028f1d54" exitCode=0 Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.676598 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqndd" event={"ID":"ddd543be-03fc-4a61-bb0b-55a066361a5f","Type":"ContainerDied","Data":"b573b028c6503d871975bc29f32aef22c05a1b0aa15962dbb2b5064c028f1d54"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.773137 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" podStartSLOduration=2.773111215 podStartE2EDuration="2.773111215s" podCreationTimestamp="2026-01-21 16:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:29.70816856 +0000 UTC m=+5551.785001589" watchObservedRunningTime="2026-01-21 16:06:29.773111215 +0000 UTC m=+5551.849944244" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.707019 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqndd" event={"ID":"ddd543be-03fc-4a61-bb0b-55a066361a5f","Type":"ContainerStarted","Data":"d5c168b8bfc82e8b469571ae78e3766b5afadc9dedc2c3dfaca0d6e58daff150"} Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.715261 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" event={"ID":"ff4fadc7-2c31-451f-9455-5112a195b36e","Type":"ContainerStarted","Data":"5d178668253a4565a7e272761e78d3c8af2f6d158e8aec8d4e0682f8d430786d"} Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.718528 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-85645f8dd4-bf5z5"] Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.720210 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.722346 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.722381 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.734346 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85645f8dd4-bf5z5"] Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.737992 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lqndd" podStartSLOduration=3.18290342 podStartE2EDuration="5.737973255s" podCreationTimestamp="2026-01-21 16:06:25 +0000 UTC" firstStartedPulling="2026-01-21 16:06:27.560510086 +0000 UTC m=+5549.637343115" lastFinishedPulling="2026-01-21 16:06:30.115579921 +0000 UTC m=+5552.192412950" observedRunningTime="2026-01-21 16:06:30.736500164 +0000 UTC m=+5552.813333203" watchObservedRunningTime="2026-01-21 16:06:30.737973255 +0000 UTC m=+5552.814806284" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.784353 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" podStartSLOduration=3.7843367690000003 podStartE2EDuration="3.784336769s" podCreationTimestamp="2026-01-21 16:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:30.783654209 +0000 UTC m=+5552.860487238" watchObservedRunningTime="2026-01-21 16:06:30.784336769 +0000 UTC m=+5552.861169798" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.850814 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2frk\" (UniqueName: \"kubernetes.io/projected/49dfaf72-0f35-4705-a9d8-830878fc46d1-kube-api-access-m2frk\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.851033 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-combined-ca-bundle\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.851147 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-internal-tls-certs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.851214 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49dfaf72-0f35-4705-a9d8-830878fc46d1-logs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.851267 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-public-tls-certs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.851363 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-config-data-custom\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.851395 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-config-data\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.954171 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-combined-ca-bundle\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.954212 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-internal-tls-certs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.954243 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49dfaf72-0f35-4705-a9d8-830878fc46d1-logs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.954261 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-public-tls-certs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.954295 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-config-data-custom\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.954311 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-config-data\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.954386 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2frk\" (UniqueName: \"kubernetes.io/projected/49dfaf72-0f35-4705-a9d8-830878fc46d1-kube-api-access-m2frk\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.954790 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49dfaf72-0f35-4705-a9d8-830878fc46d1-logs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.961144 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-public-tls-certs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.961698 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-config-data-custom\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.962703 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-internal-tls-certs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.975479 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-combined-ca-bundle\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.976411 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-config-data\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.983081 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2frk\" (UniqueName: \"kubernetes.io/projected/49dfaf72-0f35-4705-a9d8-830878fc46d1-kube-api-access-m2frk\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:31 crc kubenswrapper[4902]: I0121 16:06:31.044533 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:31 crc kubenswrapper[4902]: I0121 16:06:31.471720 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85645f8dd4-bf5z5"] Jan 21 16:06:31 crc kubenswrapper[4902]: W0121 16:06:31.474874 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49dfaf72_0f35_4705_a9d8_830878fc46d1.slice/crio-ce538407c072e16f9728c02b46405dc894ea7f51826ce506d68b3e7971f3db4a WatchSource:0}: Error finding container ce538407c072e16f9728c02b46405dc894ea7f51826ce506d68b3e7971f3db4a: Status 404 returned error can't find the container with id ce538407c072e16f9728c02b46405dc894ea7f51826ce506d68b3e7971f3db4a Jan 21 16:06:31 crc kubenswrapper[4902]: I0121 16:06:31.723818 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85645f8dd4-bf5z5" event={"ID":"49dfaf72-0f35-4705-a9d8-830878fc46d1","Type":"ContainerStarted","Data":"2a39058449ea7ade45f8902dd1b1ab04a92974d6273b9cf8224f9eb4f50e0ebd"} Jan 21 16:06:31 crc kubenswrapper[4902]: I0121 16:06:31.723850 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85645f8dd4-bf5z5" event={"ID":"49dfaf72-0f35-4705-a9d8-830878fc46d1","Type":"ContainerStarted","Data":"ce538407c072e16f9728c02b46405dc894ea7f51826ce506d68b3e7971f3db4a"} Jan 21 16:06:31 crc kubenswrapper[4902]: I0121 16:06:31.724256 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:32 crc kubenswrapper[4902]: I0121 16:06:32.738570 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85645f8dd4-bf5z5" event={"ID":"49dfaf72-0f35-4705-a9d8-830878fc46d1","Type":"ContainerStarted","Data":"2c9d9f1101a8f35b09635f69444a56d9252c27db830d4f2642f1eb2abea8a024"} Jan 21 16:06:32 crc kubenswrapper[4902]: I0121 16:06:32.782298 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-85645f8dd4-bf5z5" podStartSLOduration=2.782279955 podStartE2EDuration="2.782279955s" podCreationTimestamp="2026-01-21 16:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:32.77676867 +0000 UTC m=+5554.853601709" watchObservedRunningTime="2026-01-21 16:06:32.782279955 +0000 UTC m=+5554.859112984" Jan 21 16:06:33 crc kubenswrapper[4902]: I0121 16:06:33.745067 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:33 crc kubenswrapper[4902]: I0121 16:06:33.745147 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:35 crc kubenswrapper[4902]: I0121 16:06:35.196773 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:35 crc kubenswrapper[4902]: I0121 16:06:35.857993 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:35 crc kubenswrapper[4902]: I0121 16:06:35.858358 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:36 crc kubenswrapper[4902]: I0121 16:06:36.609487 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:36 crc kubenswrapper[4902]: I0121 16:06:36.943914 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lqndd" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="registry-server" probeResult="failure" output=< Jan 21 16:06:36 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 16:06:36 crc kubenswrapper[4902]: > Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.495826 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.537052 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.604592 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-659b467f5b-b29gg"] Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.604850 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-659b467f5b-b29gg" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api-log" containerID="cri-o://3aa4ff5500e0f1a699c62a0b183168e55443f1a98525a6c911857d407fabc6d6" gracePeriod=30 Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.604994 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-659b467f5b-b29gg" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api" containerID="cri-o://fd2638be10a4932da8d6b26c06b5ad301fa3bce23378df3af60cb4cb40724ee4" gracePeriod=30 Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.613527 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-659b467f5b-b29gg" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.30:9311/healthcheck\": EOF" Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.621781 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-659b467f5b-b29gg" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.30:9311/healthcheck\": EOF" Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.794719 4902 generic.go:334] "Generic (PLEG): container finished" podID="79ad32fd-7d7a-4779-87c5-093c16782962" containerID="3aa4ff5500e0f1a699c62a0b183168e55443f1a98525a6c911857d407fabc6d6" exitCode=143 Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.795001 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659b467f5b-b29gg" event={"ID":"79ad32fd-7d7a-4779-87c5-093c16782962","Type":"ContainerDied","Data":"3aa4ff5500e0f1a699c62a0b183168e55443f1a98525a6c911857d407fabc6d6"} Jan 21 16:06:38 crc kubenswrapper[4902]: I0121 16:06:38.168219 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:38 crc kubenswrapper[4902]: I0121 16:06:38.261992 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b4c895fc-lcmhk"] Jan 21 16:06:38 crc kubenswrapper[4902]: I0121 16:06:38.262304 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" podUID="f2ab6913-fdd0-4944-8c16-c213aecdd825" containerName="dnsmasq-dns" 
containerID="cri-o://d712f1b4cdf6346532f6de92bb64a6956b68ba70087482d2c995c46acdeba1e0" gracePeriod=10 Jan 21 16:06:38 crc kubenswrapper[4902]: I0121 16:06:38.830659 4902 generic.go:334] "Generic (PLEG): container finished" podID="f2ab6913-fdd0-4944-8c16-c213aecdd825" containerID="d712f1b4cdf6346532f6de92bb64a6956b68ba70087482d2c995c46acdeba1e0" exitCode=0 Jan 21 16:06:38 crc kubenswrapper[4902]: I0121 16:06:38.830924 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" event={"ID":"f2ab6913-fdd0-4944-8c16-c213aecdd825","Type":"ContainerDied","Data":"d712f1b4cdf6346532f6de92bb64a6956b68ba70087482d2c995c46acdeba1e0"} Jan 21 16:06:38 crc kubenswrapper[4902]: I0121 16:06:38.830949 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" event={"ID":"f2ab6913-fdd0-4944-8c16-c213aecdd825","Type":"ContainerDied","Data":"f65c006b6d1833eb0c682b4e268cdf5aca7299197e459f4d36236b4f3229b9fe"} Jan 21 16:06:38 crc kubenswrapper[4902]: I0121 16:06:38.830961 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f65c006b6d1833eb0c682b4e268cdf5aca7299197e459f4d36236b4f3229b9fe" Jan 21 16:06:38 crc kubenswrapper[4902]: I0121 16:06:38.895116 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.021679 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-297sx\" (UniqueName: \"kubernetes.io/projected/f2ab6913-fdd0-4944-8c16-c213aecdd825-kube-api-access-297sx\") pod \"f2ab6913-fdd0-4944-8c16-c213aecdd825\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.021743 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-nb\") pod \"f2ab6913-fdd0-4944-8c16-c213aecdd825\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.021811 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-dns-svc\") pod \"f2ab6913-fdd0-4944-8c16-c213aecdd825\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.021891 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-config\") pod \"f2ab6913-fdd0-4944-8c16-c213aecdd825\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.021994 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-sb\") pod \"f2ab6913-fdd0-4944-8c16-c213aecdd825\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.027885 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2ab6913-fdd0-4944-8c16-c213aecdd825-kube-api-access-297sx" (OuterVolumeSpecName: "kube-api-access-297sx") pod "f2ab6913-fdd0-4944-8c16-c213aecdd825" (UID: "f2ab6913-fdd0-4944-8c16-c213aecdd825"). InnerVolumeSpecName "kube-api-access-297sx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.070938 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2ab6913-fdd0-4944-8c16-c213aecdd825" (UID: "f2ab6913-fdd0-4944-8c16-c213aecdd825"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.074919 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2ab6913-fdd0-4944-8c16-c213aecdd825" (UID: "f2ab6913-fdd0-4944-8c16-c213aecdd825"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.092901 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-config" (OuterVolumeSpecName: "config") pod "f2ab6913-fdd0-4944-8c16-c213aecdd825" (UID: "f2ab6913-fdd0-4944-8c16-c213aecdd825"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.105948 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2ab6913-fdd0-4944-8c16-c213aecdd825" (UID: "f2ab6913-fdd0-4944-8c16-c213aecdd825"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.124234 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.124260 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-297sx\" (UniqueName: \"kubernetes.io/projected/f2ab6913-fdd0-4944-8c16-c213aecdd825-kube-api-access-297sx\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.124269 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.124279 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.124287 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.840395 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.894199 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b4c895fc-lcmhk"] Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.896218 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b4c895fc-lcmhk"] Jan 21 16:06:40 crc kubenswrapper[4902]: I0121 16:06:40.305566 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2ab6913-fdd0-4944-8c16-c213aecdd825" path="/var/lib/kubelet/pods/f2ab6913-fdd0-4944-8c16-c213aecdd825/volumes" Jan 21 16:06:41 crc kubenswrapper[4902]: I0121 16:06:41.294847 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:06:41 crc kubenswrapper[4902]: E0121 16:06:41.295229 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:06:43 crc kubenswrapper[4902]: I0121 16:06:43.001593 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-659b467f5b-b29gg" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.30:9311/healthcheck\": read tcp 10.217.0.2:50934->10.217.1.30:9311: read: connection reset by peer" Jan 21 16:06:43 crc kubenswrapper[4902]: I0121 16:06:43.001601 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-659b467f5b-b29gg" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.30:9311/healthcheck\": read tcp 10.217.0.2:50942->10.217.1.30:9311: read: connection reset by peer" Jan 21 16:06:43 crc kubenswrapper[4902]: I0121 16:06:43.254119 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-659b467f5b-b29gg" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.30:9311/healthcheck\": dial tcp 10.217.1.30:9311: connect: connection refused" Jan 21 16:06:43 crc kubenswrapper[4902]: I0121 16:06:43.254246 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-659b467f5b-b29gg" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.30:9311/healthcheck\": dial tcp 10.217.1.30:9311: connect: connection refused" Jan 21 16:06:43 crc kubenswrapper[4902]: I0121 16:06:43.891016 4902 generic.go:334] "Generic (PLEG): container finished" podID="79ad32fd-7d7a-4779-87c5-093c16782962" containerID="fd2638be10a4932da8d6b26c06b5ad301fa3bce23378df3af60cb4cb40724ee4" exitCode=0 Jan 21 16:06:43 crc kubenswrapper[4902]: I0121 16:06:43.891091 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659b467f5b-b29gg" event={"ID":"79ad32fd-7d7a-4779-87c5-093c16782962","Type":"ContainerDied","Data":"fd2638be10a4932da8d6b26c06b5ad301fa3bce23378df3af60cb4cb40724ee4"} Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.112658 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.125913 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data\") pod \"79ad32fd-7d7a-4779-87c5-093c16782962\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.126032 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data-custom\") pod \"79ad32fd-7d7a-4779-87c5-093c16782962\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.126151 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-combined-ca-bundle\") pod \"79ad32fd-7d7a-4779-87c5-093c16782962\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.126227 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79ad32fd-7d7a-4779-87c5-093c16782962-logs\") pod \"79ad32fd-7d7a-4779-87c5-093c16782962\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.126276 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6gdv\" (UniqueName: \"kubernetes.io/projected/79ad32fd-7d7a-4779-87c5-093c16782962-kube-api-access-n6gdv\") pod \"79ad32fd-7d7a-4779-87c5-093c16782962\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.126924 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79ad32fd-7d7a-4779-87c5-093c16782962-logs" (OuterVolumeSpecName: "logs") pod "79ad32fd-7d7a-4779-87c5-093c16782962" (UID: "79ad32fd-7d7a-4779-87c5-093c16782962"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.133181 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "79ad32fd-7d7a-4779-87c5-093c16782962" (UID: "79ad32fd-7d7a-4779-87c5-093c16782962"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.137481 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79ad32fd-7d7a-4779-87c5-093c16782962-kube-api-access-n6gdv" (OuterVolumeSpecName: "kube-api-access-n6gdv") pod "79ad32fd-7d7a-4779-87c5-093c16782962" (UID: "79ad32fd-7d7a-4779-87c5-093c16782962"). InnerVolumeSpecName "kube-api-access-n6gdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.193192 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79ad32fd-7d7a-4779-87c5-093c16782962" (UID: "79ad32fd-7d7a-4779-87c5-093c16782962"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.227526 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.227552 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.227561 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79ad32fd-7d7a-4779-87c5-093c16782962-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.227569 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6gdv\" (UniqueName: \"kubernetes.io/projected/79ad32fd-7d7a-4779-87c5-093c16782962-kube-api-access-n6gdv\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.227637 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data" (OuterVolumeSpecName: "config-data") pod "79ad32fd-7d7a-4779-87c5-093c16782962" (UID: "79ad32fd-7d7a-4779-87c5-093c16782962"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.329758 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.901750 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659b467f5b-b29gg" event={"ID":"79ad32fd-7d7a-4779-87c5-093c16782962","Type":"ContainerDied","Data":"ab7459deb556f801c8bbce99eae2c2e300be05d5f0dd8719e129ec50c380cba7"} Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.901792 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.901817 4902 scope.go:117] "RemoveContainer" containerID="fd2638be10a4932da8d6b26c06b5ad301fa3bce23378df3af60cb4cb40724ee4" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.924522 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-659b467f5b-b29gg"] Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.935428 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-659b467f5b-b29gg"] Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.937383 4902 scope.go:117] "RemoveContainer" containerID="3aa4ff5500e0f1a699c62a0b183168e55443f1a98525a6c911857d407fabc6d6" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.615988 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-w8j46"] Jan 21 16:06:45 crc kubenswrapper[4902]: E0121 16:06:45.616325 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ab6913-fdd0-4944-8c16-c213aecdd825" containerName="dnsmasq-dns" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.616337 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ab6913-fdd0-4944-8c16-c213aecdd825" containerName="dnsmasq-dns" Jan 21 16:06:45 crc kubenswrapper[4902]: E0121 16:06:45.616352 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.616358 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api" Jan 21 16:06:45 crc kubenswrapper[4902]: E0121 16:06:45.616377 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api-log" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.616383 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api-log" Jan 21 16:06:45 crc kubenswrapper[4902]: E0121 16:06:45.616400 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ab6913-fdd0-4944-8c16-c213aecdd825" containerName="init" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.616405 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ab6913-fdd0-4944-8c16-c213aecdd825" containerName="init" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.616537 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2ab6913-fdd0-4944-8c16-c213aecdd825" containerName="dnsmasq-dns" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.616551 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.616566 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api-log" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.617200 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.626409 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-w8j46"] Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.650244 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b91136e9-5bad-4d5c-8eff-8a77985a1726-operator-scripts\") pod \"neutron-db-create-w8j46\" (UID: \"b91136e9-5bad-4d5c-8eff-8a77985a1726\") " pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.650608 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5fbq\" (UniqueName: \"kubernetes.io/projected/b91136e9-5bad-4d5c-8eff-8a77985a1726-kube-api-access-g5fbq\") pod \"neutron-db-create-w8j46\" (UID: \"b91136e9-5bad-4d5c-8eff-8a77985a1726\") " pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.716591 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cb7a-account-create-update-qqdxl"] Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.717586 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.719157 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.743838 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cb7a-account-create-update-qqdxl"] Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.751643 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5fbq\" (UniqueName: \"kubernetes.io/projected/b91136e9-5bad-4d5c-8eff-8a77985a1726-kube-api-access-g5fbq\") pod \"neutron-db-create-w8j46\" (UID: \"b91136e9-5bad-4d5c-8eff-8a77985a1726\") " pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.751691 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-operator-scripts\") pod \"neutron-cb7a-account-create-update-qqdxl\" (UID: \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\") " pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.751764 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b91136e9-5bad-4d5c-8eff-8a77985a1726-operator-scripts\") pod \"neutron-db-create-w8j46\" (UID: \"b91136e9-5bad-4d5c-8eff-8a77985a1726\") " pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.751798 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqbhs\" (UniqueName: \"kubernetes.io/projected/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-kube-api-access-tqbhs\") pod \"neutron-cb7a-account-create-update-qqdxl\" (UID: \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\") " pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.752771 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b91136e9-5bad-4d5c-8eff-8a77985a1726-operator-scripts\") pod \"neutron-db-create-w8j46\" (UID: \"b91136e9-5bad-4d5c-8eff-8a77985a1726\") " pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.804873 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5fbq\" (UniqueName: \"kubernetes.io/projected/b91136e9-5bad-4d5c-8eff-8a77985a1726-kube-api-access-g5fbq\") pod \"neutron-db-create-w8j46\" (UID: \"b91136e9-5bad-4d5c-8eff-8a77985a1726\") " pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.852788 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-operator-scripts\") pod \"neutron-cb7a-account-create-update-qqdxl\" (UID: \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\") " pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.852927 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqbhs\" (UniqueName: \"kubernetes.io/projected/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-kube-api-access-tqbhs\") pod \"neutron-cb7a-account-create-update-qqdxl\" (UID: \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\") " pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.853592 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-operator-scripts\") pod \"neutron-cb7a-account-create-update-qqdxl\" (UID: \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\") " pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.874680 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqbhs\" (UniqueName: \"kubernetes.io/projected/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-kube-api-access-tqbhs\") pod \"neutron-cb7a-account-create-update-qqdxl\" (UID: \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\") " pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.907962 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.934100 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.959488 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.051163 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.142446 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lqndd"] Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.306429 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" path="/var/lib/kubelet/pods/79ad32fd-7d7a-4779-87c5-093c16782962/volumes" Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.446733 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-w8j46"] Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.544141 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cb7a-account-create-update-qqdxl"] Jan 21 16:06:46 crc kubenswrapper[4902]: W0121 16:06:46.546663 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3ecff7c_0bbc_47c7_82b4_fbdce132c94b.slice/crio-cc9b3facb086736d2fafbcc6a490312b82234f09a075436f35f7597a70f964dc WatchSource:0}: Error finding container cc9b3facb086736d2fafbcc6a490312b82234f09a075436f35f7597a70f964dc: Status 404 returned error can't find the container with id cc9b3facb086736d2fafbcc6a490312b82234f09a075436f35f7597a70f964dc Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.918404 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w8j46" event={"ID":"b91136e9-5bad-4d5c-8eff-8a77985a1726","Type":"ContainerStarted","Data":"efad9d3030aa3752d324b9640e74fe010cdfafc51d4ab887dfdd4055c1f6fa5a"} Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.918722 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w8j46" event={"ID":"b91136e9-5bad-4d5c-8eff-8a77985a1726","Type":"ContainerStarted","Data":"ed2497b8bd2c814230d43e74849565f16f3e8ac55df9e5dbe20de8f27938bd87"} Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.920249 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb7a-account-create-update-qqdxl" event={"ID":"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b","Type":"ContainerStarted","Data":"371e1a26e1a76ba398e48c1e98072317dd29a6c8abf9e8ab60b15d658481161c"} Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.920294 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb7a-account-create-update-qqdxl" event={"ID":"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b","Type":"ContainerStarted","Data":"cc9b3facb086736d2fafbcc6a490312b82234f09a075436f35f7597a70f964dc"} Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.938189 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-w8j46" podStartSLOduration=1.9381693279999999 podStartE2EDuration="1.938169328s" podCreationTimestamp="2026-01-21 16:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:46.931072928 +0000 UTC m=+5569.007905947" watchObservedRunningTime="2026-01-21 16:06:46.938169328 +0000 UTC m=+5569.015002357" Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.947865 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-cb7a-account-create-update-qqdxl" podStartSLOduration=1.94784998 podStartE2EDuration="1.94784998s" podCreationTimestamp="2026-01-21 16:06:45 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:46.946633536 +0000 UTC m=+5569.023466585" watchObservedRunningTime="2026-01-21 16:06:46.94784998 +0000 UTC m=+5569.024683009" Jan 21 16:06:47 crc kubenswrapper[4902]: I0121 16:06:47.930456 4902 generic.go:334] "Generic (PLEG): container finished" podID="d3ecff7c-0bbc-47c7-82b4-fbdce132c94b" containerID="371e1a26e1a76ba398e48c1e98072317dd29a6c8abf9e8ab60b15d658481161c" exitCode=0 Jan 21 16:06:47 crc kubenswrapper[4902]: I0121 16:06:47.930516 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb7a-account-create-update-qqdxl" event={"ID":"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b","Type":"ContainerDied","Data":"371e1a26e1a76ba398e48c1e98072317dd29a6c8abf9e8ab60b15d658481161c"} Jan 21 16:06:47 crc kubenswrapper[4902]: I0121 16:06:47.932967 4902 generic.go:334] "Generic (PLEG): container finished" podID="b91136e9-5bad-4d5c-8eff-8a77985a1726" containerID="efad9d3030aa3752d324b9640e74fe010cdfafc51d4ab887dfdd4055c1f6fa5a" exitCode=0 Jan 21 16:06:47 crc kubenswrapper[4902]: I0121 16:06:47.933590 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lqndd" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="registry-server" containerID="cri-o://d5c168b8bfc82e8b469571ae78e3766b5afadc9dedc2c3dfaca0d6e58daff150" gracePeriod=2 Jan 21 16:06:47 crc kubenswrapper[4902]: I0121 16:06:47.933020 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w8j46" event={"ID":"b91136e9-5bad-4d5c-8eff-8a77985a1726","Type":"ContainerDied","Data":"efad9d3030aa3752d324b9640e74fe010cdfafc51d4ab887dfdd4055c1f6fa5a"} Jan 21 16:06:48 crc kubenswrapper[4902]: I0121 16:06:48.070251 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qc8ct"] Jan 21 16:06:48 crc kubenswrapper[4902]: I0121 16:06:48.076647 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qc8ct"] Jan 21 16:06:48 crc kubenswrapper[4902]: I0121 16:06:48.313693 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d642b708-8313-4edd-8183-4dcd679721b6" path="/var/lib/kubelet/pods/d642b708-8313-4edd-8183-4dcd679721b6/volumes" Jan 21 16:06:48 crc kubenswrapper[4902]: I0121 16:06:48.942592 4902 generic.go:334] "Generic (PLEG): container finished" podID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerID="d5c168b8bfc82e8b469571ae78e3766b5afadc9dedc2c3dfaca0d6e58daff150" exitCode=0 Jan 21 16:06:48 crc kubenswrapper[4902]: I0121 16:06:48.943008 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqndd" event={"ID":"ddd543be-03fc-4a61-bb0b-55a066361a5f","Type":"ContainerDied","Data":"d5c168b8bfc82e8b469571ae78e3766b5afadc9dedc2c3dfaca0d6e58daff150"} Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.046301 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.139909 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqg6q\" (UniqueName: \"kubernetes.io/projected/ddd543be-03fc-4a61-bb0b-55a066361a5f-kube-api-access-gqg6q\") pod \"ddd543be-03fc-4a61-bb0b-55a066361a5f\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.140189 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-utilities\") pod \"ddd543be-03fc-4a61-bb0b-55a066361a5f\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.140783 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-catalog-content\") pod \"ddd543be-03fc-4a61-bb0b-55a066361a5f\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.149264 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-utilities" (OuterVolumeSpecName: "utilities") pod "ddd543be-03fc-4a61-bb0b-55a066361a5f" (UID: "ddd543be-03fc-4a61-bb0b-55a066361a5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.150912 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd543be-03fc-4a61-bb0b-55a066361a5f-kube-api-access-gqg6q" (OuterVolumeSpecName: "kube-api-access-gqg6q") pod "ddd543be-03fc-4a61-bb0b-55a066361a5f" (UID: "ddd543be-03fc-4a61-bb0b-55a066361a5f"). InnerVolumeSpecName "kube-api-access-gqg6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.243525 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqg6q\" (UniqueName: \"kubernetes.io/projected/ddd543be-03fc-4a61-bb0b-55a066361a5f-kube-api-access-gqg6q\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.243569 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.271262 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddd543be-03fc-4a61-bb0b-55a066361a5f" (UID: "ddd543be-03fc-4a61-bb0b-55a066361a5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.347588 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.361829 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.386817 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.448741 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqbhs\" (UniqueName: \"kubernetes.io/projected/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-kube-api-access-tqbhs\") pod \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\" (UID: \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\") " Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.448835 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-operator-scripts\") pod \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\" (UID: \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\") " Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.449666 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3ecff7c-0bbc-47c7-82b4-fbdce132c94b" (UID: "d3ecff7c-0bbc-47c7-82b4-fbdce132c94b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.455841 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-kube-api-access-tqbhs" (OuterVolumeSpecName: "kube-api-access-tqbhs") pod "d3ecff7c-0bbc-47c7-82b4-fbdce132c94b" (UID: "d3ecff7c-0bbc-47c7-82b4-fbdce132c94b"). InnerVolumeSpecName "kube-api-access-tqbhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.550902 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b91136e9-5bad-4d5c-8eff-8a77985a1726-operator-scripts\") pod \"b91136e9-5bad-4d5c-8eff-8a77985a1726\" (UID: \"b91136e9-5bad-4d5c-8eff-8a77985a1726\") " Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.551314 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5fbq\" (UniqueName: \"kubernetes.io/projected/b91136e9-5bad-4d5c-8eff-8a77985a1726-kube-api-access-g5fbq\") pod \"b91136e9-5bad-4d5c-8eff-8a77985a1726\" (UID: \"b91136e9-5bad-4d5c-8eff-8a77985a1726\") " Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.551654 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqbhs\" (UniqueName: \"kubernetes.io/projected/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-kube-api-access-tqbhs\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.551670 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.551900 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91136e9-5bad-4d5c-8eff-8a77985a1726-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b91136e9-5bad-4d5c-8eff-8a77985a1726" (UID: "b91136e9-5bad-4d5c-8eff-8a77985a1726"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.557237 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b91136e9-5bad-4d5c-8eff-8a77985a1726-kube-api-access-g5fbq" (OuterVolumeSpecName: "kube-api-access-g5fbq") pod "b91136e9-5bad-4d5c-8eff-8a77985a1726" (UID: "b91136e9-5bad-4d5c-8eff-8a77985a1726"). InnerVolumeSpecName "kube-api-access-g5fbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.654707 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b91136e9-5bad-4d5c-8eff-8a77985a1726-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.654744 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5fbq\" (UniqueName: \"kubernetes.io/projected/b91136e9-5bad-4d5c-8eff-8a77985a1726-kube-api-access-g5fbq\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.953522 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w8j46" event={"ID":"b91136e9-5bad-4d5c-8eff-8a77985a1726","Type":"ContainerDied","Data":"ed2497b8bd2c814230d43e74849565f16f3e8ac55df9e5dbe20de8f27938bd87"} Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.953563 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed2497b8bd2c814230d43e74849565f16f3e8ac55df9e5dbe20de8f27938bd87" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.953631 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.956440 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqndd" event={"ID":"ddd543be-03fc-4a61-bb0b-55a066361a5f","Type":"ContainerDied","Data":"ff9be32e5e25b980d1ae27fea132e376859f3819b5b41cfaa777f65b307f07ce"} Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.956477 4902 scope.go:117] "RemoveContainer" containerID="d5c168b8bfc82e8b469571ae78e3766b5afadc9dedc2c3dfaca0d6e58daff150" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.956605 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.972072 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb7a-account-create-update-qqdxl" event={"ID":"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b","Type":"ContainerDied","Data":"cc9b3facb086736d2fafbcc6a490312b82234f09a075436f35f7597a70f964dc"} Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.972104 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc9b3facb086736d2fafbcc6a490312b82234f09a075436f35f7597a70f964dc" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.972129 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.996242 4902 scope.go:117] "RemoveContainer" containerID="b573b028c6503d871975bc29f32aef22c05a1b0aa15962dbb2b5064c028f1d54" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.008240 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lqndd"] Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.017780 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lqndd"] Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.018261 4902 scope.go:117] "RemoveContainer" containerID="c1974a18bb600f84ad592fcb1ec0fc601ac073e08d0d02562db2c3da418aff99" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.311656 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" path="/var/lib/kubelet/pods/ddd543be-03fc-4a61-bb0b-55a066361a5f/volumes" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.936909 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-fj4nd"] Jan 21 16:06:50 crc kubenswrapper[4902]: E0121 16:06:50.938025 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="registry-server" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.938083 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="registry-server" Jan 21 16:06:50 crc kubenswrapper[4902]: E0121 16:06:50.938112 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="extract-utilities" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.938124 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="extract-utilities" Jan 21 16:06:50 crc kubenswrapper[4902]: E0121 16:06:50.938168 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="extract-content" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.938180 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="extract-content" Jan 21 16:06:50 crc kubenswrapper[4902]: E0121 16:06:50.938194 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b91136e9-5bad-4d5c-8eff-8a77985a1726" containerName="mariadb-database-create" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.938205 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b91136e9-5bad-4d5c-8eff-8a77985a1726" containerName="mariadb-database-create" Jan 21 16:06:50 crc kubenswrapper[4902]: E0121 16:06:50.938265 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ecff7c-0bbc-47c7-82b4-fbdce132c94b" containerName="mariadb-account-create-update" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.938277 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ecff7c-0bbc-47c7-82b4-fbdce132c94b" containerName="mariadb-account-create-update" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.938612 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b91136e9-5bad-4d5c-8eff-8a77985a1726" containerName="mariadb-database-create" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.938676 4902 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d3ecff7c-0bbc-47c7-82b4-fbdce132c94b" containerName="mariadb-account-create-update" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.938711 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="registry-server" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.939958 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.942505 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.942590 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.942519 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kpzjm" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.949941 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fj4nd"] Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.083122 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz5qx\" (UniqueName: \"kubernetes.io/projected/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-kube-api-access-rz5qx\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.083258 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-combined-ca-bundle\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.083298 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-config\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.184630 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz5qx\" (UniqueName: \"kubernetes.io/projected/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-kube-api-access-rz5qx\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.184738 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-combined-ca-bundle\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.184778 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-config\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.189654 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-config\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.195737 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-combined-ca-bundle\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.201566 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz5qx\" (UniqueName: \"kubernetes.io/projected/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-kube-api-access-rz5qx\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.265410 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.704147 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fj4nd"] Jan 21 16:06:51 crc kubenswrapper[4902]: W0121 16:06:51.707288 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97a9d8bd_92b5_42ef_b945_6b3ccc65b48b.slice/crio-10973f4d131c614f6a2244337364a4826c042c9f800704193f82c6fb2420e131 WatchSource:0}: Error finding container 10973f4d131c614f6a2244337364a4826c042c9f800704193f82c6fb2420e131: Status 404 returned error can't find the container with id 10973f4d131c614f6a2244337364a4826c042c9f800704193f82c6fb2420e131 Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.995738 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fj4nd" event={"ID":"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b","Type":"ContainerStarted","Data":"f86be6b9f95f42ed7575c1db6ca5d50d96cc6520921a01dd5dff53f1cdbb4ae8"} Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.996115 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fj4nd" event={"ID":"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b","Type":"ContainerStarted","Data":"10973f4d131c614f6a2244337364a4826c042c9f800704193f82c6fb2420e131"} Jan 21 16:06:52 crc kubenswrapper[4902]: I0121 16:06:52.013718 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-fj4nd" podStartSLOduration=2.013700508 podStartE2EDuration="2.013700508s" podCreationTimestamp="2026-01-21 16:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:52.011171427 +0000 UTC m=+5574.088004456" watchObservedRunningTime="2026-01-21 16:06:52.013700508 +0000 UTC m=+5574.090533537" Jan 21 16:06:53 crc kubenswrapper[4902]: I0121 16:06:53.296616 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:06:53 crc kubenswrapper[4902]: E0121 16:06:53.296920 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:06:56 crc kubenswrapper[4902]: I0121 16:06:56.031479 4902 generic.go:334] "Generic (PLEG): container finished" podID="97a9d8bd-92b5-42ef-b945-6b3ccc65b48b" containerID="f86be6b9f95f42ed7575c1db6ca5d50d96cc6520921a01dd5dff53f1cdbb4ae8" exitCode=0 Jan 21 16:06:56 crc kubenswrapper[4902]: I0121 16:06:56.031552 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fj4nd" event={"ID":"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b","Type":"ContainerDied","Data":"f86be6b9f95f42ed7575c1db6ca5d50d96cc6520921a01dd5dff53f1cdbb4ae8"} Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.485670 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.601403 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz5qx\" (UniqueName: \"kubernetes.io/projected/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-kube-api-access-rz5qx\") pod \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.601646 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-config\") pod \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.601700 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-combined-ca-bundle\") pod \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.607285 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-kube-api-access-rz5qx" (OuterVolumeSpecName: "kube-api-access-rz5qx") pod "97a9d8bd-92b5-42ef-b945-6b3ccc65b48b" (UID: "97a9d8bd-92b5-42ef-b945-6b3ccc65b48b"). InnerVolumeSpecName "kube-api-access-rz5qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.626059 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97a9d8bd-92b5-42ef-b945-6b3ccc65b48b" (UID: "97a9d8bd-92b5-42ef-b945-6b3ccc65b48b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.626900 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-config" (OuterVolumeSpecName: "config") pod "97a9d8bd-92b5-42ef-b945-6b3ccc65b48b" (UID: "97a9d8bd-92b5-42ef-b945-6b3ccc65b48b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.704022 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz5qx\" (UniqueName: \"kubernetes.io/projected/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-kube-api-access-rz5qx\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.704086 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.704108 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.057895 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fj4nd" event={"ID":"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b","Type":"ContainerDied","Data":"10973f4d131c614f6a2244337364a4826c042c9f800704193f82c6fb2420e131"} Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.058234 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10973f4d131c614f6a2244337364a4826c042c9f800704193f82c6fb2420e131" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.057944 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.323931 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578fc9f6df-sv7cs"] Jan 21 16:06:58 crc kubenswrapper[4902]: E0121 16:06:58.324325 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a9d8bd-92b5-42ef-b945-6b3ccc65b48b" containerName="neutron-db-sync" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.324339 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a9d8bd-92b5-42ef-b945-6b3ccc65b48b" containerName="neutron-db-sync" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.324477 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a9d8bd-92b5-42ef-b945-6b3ccc65b48b" containerName="neutron-db-sync" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.325310 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.381373 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578fc9f6df-sv7cs"] Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.401946 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5cd8bf9fdd-mdn4r"] Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.404486 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.407027 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kpzjm" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.407128 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.407422 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.410429 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.419634 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-nb\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.419733 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp2sl\" (UniqueName: \"kubernetes.io/projected/2187cb72-8703-4c5a-b8ae-b08461a35e1b-kube-api-access-kp2sl\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.419778 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-dns-svc\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.419806 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-sb\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.419834 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-config\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.437106 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cd8bf9fdd-mdn4r"] Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.522556 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-combined-ca-bundle\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.522625 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp2sl\" (UniqueName: 
\"kubernetes.io/projected/2187cb72-8703-4c5a-b8ae-b08461a35e1b-kube-api-access-kp2sl\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.522770 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzfrg\" (UniqueName: \"kubernetes.io/projected/ccca0d57-e560-4b6a-9e68-930df6654ae6-kube-api-access-kzfrg\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.522835 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-dns-svc\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.522901 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-sb\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.522960 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-config\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.522991 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-httpd-config\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.523173 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-nb\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.523192 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-config\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.523244 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-ovndb-tls-certs\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.523841 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-sb\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.523839 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-dns-svc\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.524145 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-config\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.524524 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-nb\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.545424 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp2sl\" (UniqueName: \"kubernetes.io/projected/2187cb72-8703-4c5a-b8ae-b08461a35e1b-kube-api-access-kp2sl\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.625237 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-httpd-config\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.625332 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-config\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.625370 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-ovndb-tls-certs\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.625398 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-combined-ca-bundle\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.625436 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzfrg\" (UniqueName: \"kubernetes.io/projected/ccca0d57-e560-4b6a-9e68-930df6654ae6-kube-api-access-kzfrg\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: 
\"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.629889 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-httpd-config\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.629969 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-combined-ca-bundle\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.631440 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-ovndb-tls-certs\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.631728 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-config\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.654929 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzfrg\" (UniqueName: \"kubernetes.io/projected/ccca0d57-e560-4b6a-9e68-930df6654ae6-kube-api-access-kzfrg\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.664065 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.738130 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:59 crc kubenswrapper[4902]: I0121 16:06:59.185235 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578fc9f6df-sv7cs"] Jan 21 16:06:59 crc kubenswrapper[4902]: W0121 16:06:59.186979 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2187cb72_8703_4c5a_b8ae_b08461a35e1b.slice/crio-4b735d436fe292c999e308fdb1e7a5b7b7db30557572e001d066c0b1035f3a36 WatchSource:0}: Error finding container 4b735d436fe292c999e308fdb1e7a5b7b7db30557572e001d066c0b1035f3a36: Status 404 returned error can't find the container with id 4b735d436fe292c999e308fdb1e7a5b7b7db30557572e001d066c0b1035f3a36 Jan 21 16:06:59 crc kubenswrapper[4902]: I0121 16:06:59.517215 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cd8bf9fdd-mdn4r"] Jan 21 16:06:59 crc kubenswrapper[4902]: W0121 16:06:59.522601 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccca0d57_e560_4b6a_9e68_930df6654ae6.slice/crio-1b8bc07a5488c91aa60176a1b153e4219c4f47797c1b031d3b65cdd8eacf2919 WatchSource:0}: Error finding container 1b8bc07a5488c91aa60176a1b153e4219c4f47797c1b031d3b65cdd8eacf2919: Status 404 returned error can't find the container with id 1b8bc07a5488c91aa60176a1b153e4219c4f47797c1b031d3b65cdd8eacf2919 Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.085544 4902 generic.go:334] "Generic (PLEG): container finished" podID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" containerID="d20e1e16697cfc6d2b0773a52a542bdb6a438acc78c99ef60039303f8affa50a" exitCode=0 Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.087285 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" event={"ID":"2187cb72-8703-4c5a-b8ae-b08461a35e1b","Type":"ContainerDied","Data":"d20e1e16697cfc6d2b0773a52a542bdb6a438acc78c99ef60039303f8affa50a"} Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.087360 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" event={"ID":"2187cb72-8703-4c5a-b8ae-b08461a35e1b","Type":"ContainerStarted","Data":"4b735d436fe292c999e308fdb1e7a5b7b7db30557572e001d066c0b1035f3a36"} Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.089816 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cd8bf9fdd-mdn4r" event={"ID":"ccca0d57-e560-4b6a-9e68-930df6654ae6","Type":"ContainerStarted","Data":"fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904"} Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.089858 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cd8bf9fdd-mdn4r" event={"ID":"ccca0d57-e560-4b6a-9e68-930df6654ae6","Type":"ContainerStarted","Data":"ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04"} Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.089871 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cd8bf9fdd-mdn4r" event={"ID":"ccca0d57-e560-4b6a-9e68-930df6654ae6","Type":"ContainerStarted","Data":"1b8bc07a5488c91aa60176a1b153e4219c4f47797c1b031d3b65cdd8eacf2919"} Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.089981 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.130429 4902 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5cd8bf9fdd-mdn4r" podStartSLOduration=2.1303866559999998 podStartE2EDuration="2.130386656s" podCreationTimestamp="2026-01-21 16:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:00.125245122 +0000 UTC m=+5582.202078171" watchObservedRunningTime="2026-01-21 16:07:00.130386656 +0000 UTC m=+5582.207219706" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.343143 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66b9c9869c-btkxh"] Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.359230 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.362163 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.362326 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.363932 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66b9c9869c-btkxh"] Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.466194 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-internal-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.466262 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-ovndb-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.466291 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-combined-ca-bundle\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.466321 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b5l4\" (UniqueName: \"kubernetes.io/projected/565a7068-4930-41e5-99bb-a08376495b63-kube-api-access-5b5l4\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.466363 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-httpd-config\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.466402 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-config\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.466431 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-public-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.569611 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-config\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.570820 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-public-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.570954 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-internal-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.570986 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-ovndb-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.571014 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-combined-ca-bundle\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.571217 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b5l4\" (UniqueName: \"kubernetes.io/projected/565a7068-4930-41e5-99bb-a08376495b63-kube-api-access-5b5l4\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.571280 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-httpd-config\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.575701 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-config\") pod \"neutron-66b9c9869c-btkxh\" (UID: 
\"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.576532 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-combined-ca-bundle\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.577199 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-internal-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.577314 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-httpd-config\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.578828 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-ovndb-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.580649 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-public-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.600204 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b5l4\" (UniqueName: \"kubernetes.io/projected/565a7068-4930-41e5-99bb-a08376495b63-kube-api-access-5b5l4\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.679920 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:01 crc kubenswrapper[4902]: I0121 16:07:01.101931 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" event={"ID":"2187cb72-8703-4c5a-b8ae-b08461a35e1b","Type":"ContainerStarted","Data":"61e16963fa09505056e5b4b488dc20f78815be4f3c3bdb6f8785b3afb332dfda"} Jan 21 16:07:01 crc kubenswrapper[4902]: I0121 16:07:01.128104 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" podStartSLOduration=3.128086658 podStartE2EDuration="3.128086658s" podCreationTimestamp="2026-01-21 16:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:01.120443774 +0000 UTC m=+5583.197276793" watchObservedRunningTime="2026-01-21 16:07:01.128086658 +0000 UTC m=+5583.204919687" Jan 21 16:07:01 crc kubenswrapper[4902]: I0121 16:07:01.203803 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66b9c9869c-btkxh"] Jan 21 16:07:02 crc kubenswrapper[4902]: I0121 16:07:02.111687 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b9c9869c-btkxh" event={"ID":"565a7068-4930-41e5-99bb-a08376495b63","Type":"ContainerStarted","Data":"0287633e1aa6dad7b8e21acde3d76b800797e287231d5c35f09d1ec2c866d8ff"} Jan 21 16:07:02 crc kubenswrapper[4902]: I0121 16:07:02.111997 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:07:02 crc kubenswrapper[4902]: I0121 16:07:02.112016 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b9c9869c-btkxh" event={"ID":"565a7068-4930-41e5-99bb-a08376495b63","Type":"ContainerStarted","Data":"4a840a4d15d6dedf573309427149cc5c58b3a84758a393d7b40ddaeec8719850"} Jan 21 16:07:02 crc kubenswrapper[4902]: I0121 16:07:02.112026 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b9c9869c-btkxh" event={"ID":"565a7068-4930-41e5-99bb-a08376495b63","Type":"ContainerStarted","Data":"23025e690c1cf9ea2e18a5a67272598f4f17d1cb6a734ff1518f017078dae43e"} Jan 21 16:07:02 crc kubenswrapper[4902]: I0121 16:07:02.134404 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-66b9c9869c-btkxh" podStartSLOduration=2.134208498 podStartE2EDuration="2.134208498s" podCreationTimestamp="2026-01-21 16:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:02.130123773 +0000 UTC m=+5584.206956802" watchObservedRunningTime="2026-01-21 16:07:02.134208498 +0000 UTC m=+5584.211041527" Jan 21 16:07:03 crc kubenswrapper[4902]: I0121 16:07:03.120565 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:08 crc kubenswrapper[4902]: I0121 16:07:08.301490 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:07:08 crc kubenswrapper[4902]: E0121 16:07:08.303209 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:07:08 crc kubenswrapper[4902]: I0121 16:07:08.666293 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:07:08 crc kubenswrapper[4902]: I0121 16:07:08.748070 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85d446946c-gb4r2"] Jan 21 16:07:08 crc kubenswrapper[4902]: I0121 16:07:08.749016 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" podUID="ff4fadc7-2c31-451f-9455-5112a195b36e" containerName="dnsmasq-dns" containerID="cri-o://5d178668253a4565a7e272761e78d3c8af2f6d158e8aec8d4e0682f8d430786d" gracePeriod=10 Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.181801 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff4fadc7-2c31-451f-9455-5112a195b36e" containerID="5d178668253a4565a7e272761e78d3c8af2f6d158e8aec8d4e0682f8d430786d" exitCode=0 Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.181883 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" event={"ID":"ff4fadc7-2c31-451f-9455-5112a195b36e","Type":"ContainerDied","Data":"5d178668253a4565a7e272761e78d3c8af2f6d158e8aec8d4e0682f8d430786d"} Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.303217 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.365555 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-dns-svc\") pod \"ff4fadc7-2c31-451f-9455-5112a195b36e\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.365952 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww75l\" (UniqueName: \"kubernetes.io/projected/ff4fadc7-2c31-451f-9455-5112a195b36e-kube-api-access-ww75l\") pod \"ff4fadc7-2c31-451f-9455-5112a195b36e\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.385247 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-nb\") pod \"ff4fadc7-2c31-451f-9455-5112a195b36e\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.385359 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-config\") pod \"ff4fadc7-2c31-451f-9455-5112a195b36e\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.385382 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-sb\") pod \"ff4fadc7-2c31-451f-9455-5112a195b36e\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.390300 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ff4fadc7-2c31-451f-9455-5112a195b36e-kube-api-access-ww75l" (OuterVolumeSpecName: "kube-api-access-ww75l") pod "ff4fadc7-2c31-451f-9455-5112a195b36e" (UID: "ff4fadc7-2c31-451f-9455-5112a195b36e"). InnerVolumeSpecName "kube-api-access-ww75l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.417857 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff4fadc7-2c31-451f-9455-5112a195b36e" (UID: "ff4fadc7-2c31-451f-9455-5112a195b36e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.431352 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ff4fadc7-2c31-451f-9455-5112a195b36e" (UID: "ff4fadc7-2c31-451f-9455-5112a195b36e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.452146 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-config" (OuterVolumeSpecName: "config") pod "ff4fadc7-2c31-451f-9455-5112a195b36e" (UID: "ff4fadc7-2c31-451f-9455-5112a195b36e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.454910 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff4fadc7-2c31-451f-9455-5112a195b36e" (UID: "ff4fadc7-2c31-451f-9455-5112a195b36e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.493716 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.493751 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww75l\" (UniqueName: \"kubernetes.io/projected/ff4fadc7-2c31-451f-9455-5112a195b36e-kube-api-access-ww75l\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.493764 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.493774 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.493782 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:10 crc kubenswrapper[4902]: I0121 16:07:10.198554 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" event={"ID":"ff4fadc7-2c31-451f-9455-5112a195b36e","Type":"ContainerDied","Data":"85869cd817e0c1d272d916eb419b76f6201c88f1c3e64ad5a43adcae83c81773"} Jan 21 16:07:10 crc kubenswrapper[4902]: I0121 16:07:10.198614 4902 scope.go:117] "RemoveContainer" containerID="5d178668253a4565a7e272761e78d3c8af2f6d158e8aec8d4e0682f8d430786d" Jan 21 16:07:10 crc kubenswrapper[4902]: I0121 16:07:10.198681 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:07:10 crc kubenswrapper[4902]: I0121 16:07:10.245438 4902 scope.go:117] "RemoveContainer" containerID="78e0f5562314520f841b7ea0877c38f4f434c16d4495c8580ea2c10f6698660a" Jan 21 16:07:10 crc kubenswrapper[4902]: I0121 16:07:10.265957 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85d446946c-gb4r2"] Jan 21 16:07:10 crc kubenswrapper[4902]: I0121 16:07:10.276223 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85d446946c-gb4r2"] Jan 21 16:07:10 crc kubenswrapper[4902]: I0121 16:07:10.306664 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff4fadc7-2c31-451f-9455-5112a195b36e" path="/var/lib/kubelet/pods/ff4fadc7-2c31-451f-9455-5112a195b36e/volumes" Jan 21 16:07:20 crc kubenswrapper[4902]: I0121 16:07:20.294758 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:07:21 crc kubenswrapper[4902]: I0121 16:07:21.306071 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"db5e286ed12d5cdac8541e22aa5c6794629a15f27a4e802d85c369fc2b4f4f6b"} Jan 21 16:07:24 crc kubenswrapper[4902]: I0121 16:07:24.365994 4902 scope.go:117] "RemoveContainer" containerID="2f8dc76ea47c61aa0225c738e775c625c670e1dc7f5e344791fe2553026ed3d2" Jan 21 16:07:28 crc kubenswrapper[4902]: I0121 16:07:28.756711 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:07:30 crc kubenswrapper[4902]: I0121 16:07:30.692627 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:30 crc kubenswrapper[4902]: I0121 16:07:30.771182 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cd8bf9fdd-mdn4r"] Jan 21 16:07:30 crc kubenswrapper[4902]: I0121 16:07:30.771387 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cd8bf9fdd-mdn4r" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerName="neutron-api" containerID="cri-o://ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04" gracePeriod=30 Jan 21 16:07:30 crc kubenswrapper[4902]: I0121 16:07:30.771503 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cd8bf9fdd-mdn4r" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerName="neutron-httpd" containerID="cri-o://fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904" gracePeriod=30 Jan 21 16:07:31 crc kubenswrapper[4902]: I0121 16:07:31.633964 4902 generic.go:334] "Generic (PLEG): container finished" podID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerID="fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904" exitCode=0 Jan 21 16:07:31 crc kubenswrapper[4902]: I0121 16:07:31.634300 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cd8bf9fdd-mdn4r" event={"ID":"ccca0d57-e560-4b6a-9e68-930df6654ae6","Type":"ContainerDied","Data":"fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904"} Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.315003 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.399590 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-combined-ca-bundle\") pod \"ccca0d57-e560-4b6a-9e68-930df6654ae6\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.399638 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-config\") pod \"ccca0d57-e560-4b6a-9e68-930df6654ae6\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.399678 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-ovndb-tls-certs\") pod \"ccca0d57-e560-4b6a-9e68-930df6654ae6\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.399708 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzfrg\" (UniqueName: \"kubernetes.io/projected/ccca0d57-e560-4b6a-9e68-930df6654ae6-kube-api-access-kzfrg\") pod \"ccca0d57-e560-4b6a-9e68-930df6654ae6\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.399817 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-httpd-config\") pod \"ccca0d57-e560-4b6a-9e68-930df6654ae6\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.405439 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccca0d57-e560-4b6a-9e68-930df6654ae6-kube-api-access-kzfrg" (OuterVolumeSpecName: "kube-api-access-kzfrg") pod "ccca0d57-e560-4b6a-9e68-930df6654ae6" (UID: "ccca0d57-e560-4b6a-9e68-930df6654ae6"). InnerVolumeSpecName "kube-api-access-kzfrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.405545 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ccca0d57-e560-4b6a-9e68-930df6654ae6" (UID: "ccca0d57-e560-4b6a-9e68-930df6654ae6"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.449241 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-config" (OuterVolumeSpecName: "config") pod "ccca0d57-e560-4b6a-9e68-930df6654ae6" (UID: "ccca0d57-e560-4b6a-9e68-930df6654ae6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.455018 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccca0d57-e560-4b6a-9e68-930df6654ae6" (UID: "ccca0d57-e560-4b6a-9e68-930df6654ae6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.467985 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ccca0d57-e560-4b6a-9e68-930df6654ae6" (UID: "ccca0d57-e560-4b6a-9e68-930df6654ae6"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.501152 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.501187 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.501199 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.501207 4902 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.501216 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzfrg\" (UniqueName: \"kubernetes.io/projected/ccca0d57-e560-4b6a-9e68-930df6654ae6-kube-api-access-kzfrg\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.669928 4902 generic.go:334] "Generic (PLEG): container finished" podID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerID="ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04" exitCode=0 Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.669966 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cd8bf9fdd-mdn4r" event={"ID":"ccca0d57-e560-4b6a-9e68-930df6654ae6","Type":"ContainerDied","Data":"ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04"} Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.669987 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.670037 4902 scope.go:117] "RemoveContainer" containerID="fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.670025 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cd8bf9fdd-mdn4r" event={"ID":"ccca0d57-e560-4b6a-9e68-930df6654ae6","Type":"ContainerDied","Data":"1b8bc07a5488c91aa60176a1b153e4219c4f47797c1b031d3b65cdd8eacf2919"} Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.698659 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cd8bf9fdd-mdn4r"] Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.704707 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5cd8bf9fdd-mdn4r"] Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.708120 4902 scope.go:117] "RemoveContainer" containerID="ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.732189 4902 scope.go:117] "RemoveContainer" containerID="fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904" Jan 21 16:07:35 crc kubenswrapper[4902]: E0121 16:07:35.732709 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904\": container with ID starting with fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904 not found: ID does not exist" containerID="fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.732743 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904"} err="failed to get container status \"fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904\": rpc error: code = NotFound desc = could not find container \"fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904\": container with ID starting with fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904 not found: ID does not exist" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.732775 4902 scope.go:117] "RemoveContainer" containerID="ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04" Jan 21 16:07:35 crc kubenswrapper[4902]: E0121 16:07:35.733410 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04\": container with ID starting with ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04 not found: ID does not exist" containerID="ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.733430 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04"} err="failed to get container status \"ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04\": rpc error: code = NotFound desc = could not find container \"ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04\": container with ID starting with ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04 not found: ID does not exist" Jan 21 16:07:36 crc 
Jan 21 16:07:36 crc kubenswrapper[4902]: I0121 16:07:36.307310 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" path="/var/lib/kubelet/pods/ccca0d57-e560-4b6a-9e68-930df6654ae6/volumes"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.009705 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-mmsfz"]
Jan 21 16:07:40 crc kubenswrapper[4902]: E0121 16:07:40.010545 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerName="neutron-httpd"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.010558 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerName="neutron-httpd"
Jan 21 16:07:40 crc kubenswrapper[4902]: E0121 16:07:40.010586 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4fadc7-2c31-451f-9455-5112a195b36e" containerName="dnsmasq-dns"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.010592 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4fadc7-2c31-451f-9455-5112a195b36e" containerName="dnsmasq-dns"
Jan 21 16:07:40 crc kubenswrapper[4902]: E0121 16:07:40.010602 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerName="neutron-api"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.010608 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerName="neutron-api"
Jan 21 16:07:40 crc kubenswrapper[4902]: E0121 16:07:40.010625 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4fadc7-2c31-451f-9455-5112a195b36e" containerName="init"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.010631 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4fadc7-2c31-451f-9455-5112a195b36e" containerName="init"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.010809 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4fadc7-2c31-451f-9455-5112a195b36e" containerName="dnsmasq-dns"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.010822 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerName="neutron-httpd"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.010838 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerName="neutron-api"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.011389 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mmsfz"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.015750 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-ccbtr"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.016011 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.016207 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.020397 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.027469 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.028790 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mmsfz"]
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.082957 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-swiftconf\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.083279 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp8vb\" (UniqueName: \"kubernetes.io/projected/4000cb23-899c-4f52-8c37-8e1c7108a21d-kube-api-access-rp8vb\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.083374 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-scripts\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.083493 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-dispersionconf\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.083580 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4000cb23-899c-4f52-8c37-8e1c7108a21d-etc-swift\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.083653 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-ring-data-devices\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.083720 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-combined-ca-bundle\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.116105 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fc6cc5b55-79wth"]
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.117432 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.154576 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fc6cc5b55-79wth"]
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.185949 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-nb\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.185996 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-config\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186015 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plr2q\" (UniqueName: \"kubernetes.io/projected/111bf0bc-8088-42d9-bf09-396b7d087ae8-kube-api-access-plr2q\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186072 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp8vb\" (UniqueName: \"kubernetes.io/projected/4000cb23-899c-4f52-8c37-8e1c7108a21d-kube-api-access-rp8vb\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186092 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-scripts\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz"
Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186112 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-dns-svc\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth"
\"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186191 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-dispersionconf\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186225 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4000cb23-899c-4f52-8c37-8e1c7108a21d-etc-swift\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186244 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-ring-data-devices\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186259 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-combined-ca-bundle\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186290 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-swiftconf\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.187978 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-scripts\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.188562 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-ring-data-devices\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.188828 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4000cb23-899c-4f52-8c37-8e1c7108a21d-etc-swift\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.198430 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-dispersionconf\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.206412 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp8vb\" (UniqueName: \"kubernetes.io/projected/4000cb23-899c-4f52-8c37-8e1c7108a21d-kube-api-access-rp8vb\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.215479 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-swiftconf\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.249626 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-combined-ca-bundle\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.288246 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-nb\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.288299 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-config\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.288319 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plr2q\" (UniqueName: \"kubernetes.io/projected/111bf0bc-8088-42d9-bf09-396b7d087ae8-kube-api-access-plr2q\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.288362 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-dns-svc\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.288722 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-sb\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.289236 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-config\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.289324 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-dns-svc\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.289541 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-nb\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.291465 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-sb\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.335222 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plr2q\" (UniqueName: \"kubernetes.io/projected/111bf0bc-8088-42d9-bf09-396b7d087ae8-kube-api-access-plr2q\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.352583 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.447439 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.822578 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mmsfz"] Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.972451 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fc6cc5b55-79wth"] Jan 21 16:07:41 crc kubenswrapper[4902]: I0121 16:07:41.741840 4902 generic.go:334] "Generic (PLEG): container finished" podID="111bf0bc-8088-42d9-bf09-396b7d087ae8" containerID="f7e84713417d76194209c0593e58d711f067795272f7e92b6fd9b97ab7a3b30b" exitCode=0 Jan 21 16:07:41 crc kubenswrapper[4902]: I0121 16:07:41.742332 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" event={"ID":"111bf0bc-8088-42d9-bf09-396b7d087ae8","Type":"ContainerDied","Data":"f7e84713417d76194209c0593e58d711f067795272f7e92b6fd9b97ab7a3b30b"} Jan 21 16:07:41 crc kubenswrapper[4902]: I0121 16:07:41.744771 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" event={"ID":"111bf0bc-8088-42d9-bf09-396b7d087ae8","Type":"ContainerStarted","Data":"755a6256ef5a838eba2c3bd36413f341c71947696922ab6ab33e27a445898c69"} Jan 21 16:07:41 crc kubenswrapper[4902]: I0121 16:07:41.752697 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mmsfz" event={"ID":"4000cb23-899c-4f52-8c37-8e1c7108a21d","Type":"ContainerStarted","Data":"1109cd9543d9ca0e8eb4144afbd398a9130cce515cb6f9310120753b66d88c44"} Jan 21 16:07:41 crc kubenswrapper[4902]: I0121 16:07:41.752945 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mmsfz" 
event={"ID":"4000cb23-899c-4f52-8c37-8e1c7108a21d","Type":"ContainerStarted","Data":"6e9da8422935b55b08f6368255008dcb58e01be8b6781b8ec0c502e854c13813"} Jan 21 16:07:41 crc kubenswrapper[4902]: I0121 16:07:41.788595 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-mmsfz" podStartSLOduration=2.788576195 podStartE2EDuration="2.788576195s" podCreationTimestamp="2026-01-21 16:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:41.782977948 +0000 UTC m=+5623.859810987" watchObservedRunningTime="2026-01-21 16:07:41.788576195 +0000 UTC m=+5623.865409224" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.188971 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-84746f8478-mdj2b"] Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.190699 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.192822 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.202948 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-84746f8478-mdj2b"] Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.224331 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-run-httpd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.224412 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-etc-swift\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.224466 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-log-httpd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.224508 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-combined-ca-bundle\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.224585 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67jhd\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-kube-api-access-67jhd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.224656 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-config-data\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.326748 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-run-httpd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.326819 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-etc-swift\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.326883 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-log-httpd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.326931 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-combined-ca-bundle\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.327004 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67jhd\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-kube-api-access-67jhd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.327106 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-config-data\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.327781 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-run-httpd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.327822 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-log-httpd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.331904 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-etc-swift\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.332022 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-combined-ca-bundle\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.332696 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-config-data\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.343588 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67jhd\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-kube-api-access-67jhd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.508174 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.779608 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" event={"ID":"111bf0bc-8088-42d9-bf09-396b7d087ae8","Type":"ContainerStarted","Data":"93731fa0d5c4e506f4bd5fe81d39edaa6cbe9eb7e4608fe707e85adf923a410f"} Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.798309 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" podStartSLOduration=2.798286736 podStartE2EDuration="2.798286736s" podCreationTimestamp="2026-01-21 16:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:42.794610513 +0000 UTC m=+5624.871443542" watchObservedRunningTime="2026-01-21 16:07:42.798286736 +0000 UTC m=+5624.875119765" Jan 21 16:07:43 crc kubenswrapper[4902]: I0121 16:07:43.168892 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-84746f8478-mdj2b"] Jan 21 16:07:43 crc kubenswrapper[4902]: I0121 16:07:43.787654 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84746f8478-mdj2b" event={"ID":"de4224d4-f2fc-49c1-99cb-a5be69aa192a","Type":"ContainerStarted","Data":"404c4a3c21381e7cc61f99e3986fd4e1183c5fbe8f0d1f4a4f670f8a4ba3edf5"} Jan 21 16:07:43 crc kubenswrapper[4902]: I0121 16:07:43.788224 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:43 crc kubenswrapper[4902]: I0121 16:07:43.788274 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84746f8478-mdj2b" event={"ID":"de4224d4-f2fc-49c1-99cb-a5be69aa192a","Type":"ContainerStarted","Data":"43e71d5e398cf8f40ce3eb06bd5ddd543ee0ab417f623428bd4d97f168c68a10"} Jan 21 16:07:43 crc kubenswrapper[4902]: I0121 16:07:43.788298 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-proxy-84746f8478-mdj2b" event={"ID":"de4224d4-f2fc-49c1-99cb-a5be69aa192a","Type":"ContainerStarted","Data":"1bb55214e5dca05bf8741ccdd7566d6ef5e813cbe0984652ffe8f4ee39b2d239"} Jan 21 16:07:43 crc kubenswrapper[4902]: I0121 16:07:43.813936 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-84746f8478-mdj2b" podStartSLOduration=1.813912282 podStartE2EDuration="1.813912282s" podCreationTimestamp="2026-01-21 16:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:43.805026312 +0000 UTC m=+5625.881859361" watchObservedRunningTime="2026-01-21 16:07:43.813912282 +0000 UTC m=+5625.890745311" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.447461 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5866fbc874-ktwnr"] Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.449262 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.456281 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.456486 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.462192 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5866fbc874-ktwnr"] Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.501876 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-public-tls-certs\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.502166 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3194a4-20d2-47cf-8d32-37a8afa5738d-run-httpd\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.504191 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3194a4-20d2-47cf-8d32-37a8afa5738d-log-httpd\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.504262 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4d3194a4-20d2-47cf-8d32-37a8afa5738d-etc-swift\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.504387 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-config-data\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: 
\"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.504402 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzdfn\" (UniqueName: \"kubernetes.io/projected/4d3194a4-20d2-47cf-8d32-37a8afa5738d-kube-api-access-dzdfn\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.504464 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-combined-ca-bundle\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.504525 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-internal-tls-certs\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.606633 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-internal-tls-certs\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.606694 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-public-tls-certs\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.606726 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3194a4-20d2-47cf-8d32-37a8afa5738d-run-httpd\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.606746 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3194a4-20d2-47cf-8d32-37a8afa5738d-log-httpd\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.606778 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4d3194a4-20d2-47cf-8d32-37a8afa5738d-etc-swift\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.606856 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-config-data\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: 
\"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.606874 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzdfn\" (UniqueName: \"kubernetes.io/projected/4d3194a4-20d2-47cf-8d32-37a8afa5738d-kube-api-access-dzdfn\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.606921 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-combined-ca-bundle\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.607486 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3194a4-20d2-47cf-8d32-37a8afa5738d-run-httpd\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.607945 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3194a4-20d2-47cf-8d32-37a8afa5738d-log-httpd\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.619177 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-combined-ca-bundle\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.619675 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4d3194a4-20d2-47cf-8d32-37a8afa5738d-etc-swift\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.632104 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-public-tls-certs\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.632431 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-config-data\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.636708 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-internal-tls-certs\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc 
kubenswrapper[4902]: I0121 16:07:44.646169 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzdfn\" (UniqueName: \"kubernetes.io/projected/4d3194a4-20d2-47cf-8d32-37a8afa5738d-kube-api-access-dzdfn\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.801511 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.814918 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.815852 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:45 crc kubenswrapper[4902]: I0121 16:07:45.469354 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5866fbc874-ktwnr"] Jan 21 16:07:45 crc kubenswrapper[4902]: I0121 16:07:45.822365 4902 generic.go:334] "Generic (PLEG): container finished" podID="4000cb23-899c-4f52-8c37-8e1c7108a21d" containerID="1109cd9543d9ca0e8eb4144afbd398a9130cce515cb6f9310120753b66d88c44" exitCode=0 Jan 21 16:07:45 crc kubenswrapper[4902]: I0121 16:07:45.822436 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mmsfz" event={"ID":"4000cb23-899c-4f52-8c37-8e1c7108a21d","Type":"ContainerDied","Data":"1109cd9543d9ca0e8eb4144afbd398a9130cce515cb6f9310120753b66d88c44"} Jan 21 16:07:45 crc kubenswrapper[4902]: I0121 16:07:45.824572 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5866fbc874-ktwnr" event={"ID":"4d3194a4-20d2-47cf-8d32-37a8afa5738d","Type":"ContainerStarted","Data":"9149bfcc7534562e0aae4c781ffc79d16af585f90262aa8c324e59ea87159aec"} Jan 21 16:07:45 crc kubenswrapper[4902]: I0121 16:07:45.824635 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5866fbc874-ktwnr" event={"ID":"4d3194a4-20d2-47cf-8d32-37a8afa5738d","Type":"ContainerStarted","Data":"2be58878fc3439679995a3fabdf3d9ab8ce6569b5c9b04dd00889b253bac2ece"} Jan 21 16:07:46 crc kubenswrapper[4902]: I0121 16:07:46.837645 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5866fbc874-ktwnr" event={"ID":"4d3194a4-20d2-47cf-8d32-37a8afa5738d","Type":"ContainerStarted","Data":"4e9287e6759385ce2f52852e70535e9ba99af81abf6af1572e4a2c22448cad16"} Jan 21 16:07:46 crc kubenswrapper[4902]: I0121 16:07:46.837997 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:46 crc kubenswrapper[4902]: I0121 16:07:46.838064 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:46 crc kubenswrapper[4902]: I0121 16:07:46.867215 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5866fbc874-ktwnr" podStartSLOduration=2.867194121 podStartE2EDuration="2.867194121s" podCreationTimestamp="2026-01-21 16:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:46.865471383 +0000 UTC m=+5628.942304412" watchObservedRunningTime="2026-01-21 16:07:46.867194121 +0000 UTC m=+5628.944027150" Jan 21 16:07:47 crc kubenswrapper[4902]: 
I0121 16:07:47.240383 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.253033 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-ring-data-devices\") pod \"4000cb23-899c-4f52-8c37-8e1c7108a21d\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.253259 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-scripts\") pod \"4000cb23-899c-4f52-8c37-8e1c7108a21d\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.253329 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-dispersionconf\") pod \"4000cb23-899c-4f52-8c37-8e1c7108a21d\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.253445 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-swiftconf\") pod \"4000cb23-899c-4f52-8c37-8e1c7108a21d\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.253598 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp8vb\" (UniqueName: \"kubernetes.io/projected/4000cb23-899c-4f52-8c37-8e1c7108a21d-kube-api-access-rp8vb\") pod \"4000cb23-899c-4f52-8c37-8e1c7108a21d\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.253643 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-combined-ca-bundle\") pod \"4000cb23-899c-4f52-8c37-8e1c7108a21d\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.253706 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4000cb23-899c-4f52-8c37-8e1c7108a21d-etc-swift\") pod \"4000cb23-899c-4f52-8c37-8e1c7108a21d\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.253945 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4000cb23-899c-4f52-8c37-8e1c7108a21d" (UID: "4000cb23-899c-4f52-8c37-8e1c7108a21d"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.254748 4902 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.256513 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4000cb23-899c-4f52-8c37-8e1c7108a21d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4000cb23-899c-4f52-8c37-8e1c7108a21d" (UID: "4000cb23-899c-4f52-8c37-8e1c7108a21d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.263154 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4000cb23-899c-4f52-8c37-8e1c7108a21d-kube-api-access-rp8vb" (OuterVolumeSpecName: "kube-api-access-rp8vb") pod "4000cb23-899c-4f52-8c37-8e1c7108a21d" (UID: "4000cb23-899c-4f52-8c37-8e1c7108a21d"). InnerVolumeSpecName "kube-api-access-rp8vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.266383 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4000cb23-899c-4f52-8c37-8e1c7108a21d" (UID: "4000cb23-899c-4f52-8c37-8e1c7108a21d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.282767 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-scripts" (OuterVolumeSpecName: "scripts") pod "4000cb23-899c-4f52-8c37-8e1c7108a21d" (UID: "4000cb23-899c-4f52-8c37-8e1c7108a21d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.290403 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4000cb23-899c-4f52-8c37-8e1c7108a21d" (UID: "4000cb23-899c-4f52-8c37-8e1c7108a21d"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.292324 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4000cb23-899c-4f52-8c37-8e1c7108a21d" (UID: "4000cb23-899c-4f52-8c37-8e1c7108a21d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.356513 4902 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4000cb23-899c-4f52-8c37-8e1c7108a21d-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.356546 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.356556 4902 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.356565 4902 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.356573 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp8vb\" (UniqueName: \"kubernetes.io/projected/4000cb23-899c-4f52-8c37-8e1c7108a21d-kube-api-access-rp8vb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.356582 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.845729 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mmsfz" event={"ID":"4000cb23-899c-4f52-8c37-8e1c7108a21d","Type":"ContainerDied","Data":"6e9da8422935b55b08f6368255008dcb58e01be8b6781b8ec0c502e854c13813"} Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.845777 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e9da8422935b55b08f6368255008dcb58e01be8b6781b8ec0c502e854c13813" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.845745 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:50 crc kubenswrapper[4902]: I0121 16:07:50.449350 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:50 crc kubenswrapper[4902]: I0121 16:07:50.538514 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578fc9f6df-sv7cs"] Jan 21 16:07:50 crc kubenswrapper[4902]: I0121 16:07:50.538767 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" podUID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" containerName="dnsmasq-dns" containerID="cri-o://61e16963fa09505056e5b4b488dc20f78815be4f3c3bdb6f8785b3afb332dfda" gracePeriod=10 Jan 21 16:07:50 crc kubenswrapper[4902]: I0121 16:07:50.885630 4902 generic.go:334] "Generic (PLEG): container finished" podID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" containerID="61e16963fa09505056e5b4b488dc20f78815be4f3c3bdb6f8785b3afb332dfda" exitCode=0 Jan 21 16:07:50 crc kubenswrapper[4902]: I0121 16:07:50.885753 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" event={"ID":"2187cb72-8703-4c5a-b8ae-b08461a35e1b","Type":"ContainerDied","Data":"61e16963fa09505056e5b4b488dc20f78815be4f3c3bdb6f8785b3afb332dfda"} Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.607092 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.639794 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp2sl\" (UniqueName: \"kubernetes.io/projected/2187cb72-8703-4c5a-b8ae-b08461a35e1b-kube-api-access-kp2sl\") pod \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.640014 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-dns-svc\") pod \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.640090 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-nb\") pod \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.640161 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-config\") pod \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.640203 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-sb\") pod \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.655321 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2187cb72-8703-4c5a-b8ae-b08461a35e1b-kube-api-access-kp2sl" (OuterVolumeSpecName: 
"kube-api-access-kp2sl") pod "2187cb72-8703-4c5a-b8ae-b08461a35e1b" (UID: "2187cb72-8703-4c5a-b8ae-b08461a35e1b"). InnerVolumeSpecName "kube-api-access-kp2sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.682000 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2187cb72-8703-4c5a-b8ae-b08461a35e1b" (UID: "2187cb72-8703-4c5a-b8ae-b08461a35e1b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.683408 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2187cb72-8703-4c5a-b8ae-b08461a35e1b" (UID: "2187cb72-8703-4c5a-b8ae-b08461a35e1b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.697408 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-config" (OuterVolumeSpecName: "config") pod "2187cb72-8703-4c5a-b8ae-b08461a35e1b" (UID: "2187cb72-8703-4c5a-b8ae-b08461a35e1b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.702815 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2187cb72-8703-4c5a-b8ae-b08461a35e1b" (UID: "2187cb72-8703-4c5a-b8ae-b08461a35e1b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.742953 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp2sl\" (UniqueName: \"kubernetes.io/projected/2187cb72-8703-4c5a-b8ae-b08461a35e1b-kube-api-access-kp2sl\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.743001 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.743018 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.743029 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.743061 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.897367 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" event={"ID":"2187cb72-8703-4c5a-b8ae-b08461a35e1b","Type":"ContainerDied","Data":"4b735d436fe292c999e308fdb1e7a5b7b7db30557572e001d066c0b1035f3a36"} Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.897412 4902 scope.go:117] "RemoveContainer" containerID="61e16963fa09505056e5b4b488dc20f78815be4f3c3bdb6f8785b3afb332dfda" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.897466 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.919113 4902 scope.go:117] "RemoveContainer" containerID="d20e1e16697cfc6d2b0773a52a542bdb6a438acc78c99ef60039303f8affa50a" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.939268 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578fc9f6df-sv7cs"] Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.947664 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578fc9f6df-sv7cs"] Jan 21 16:07:52 crc kubenswrapper[4902]: I0121 16:07:52.306975 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" path="/var/lib/kubelet/pods/2187cb72-8703-4c5a-b8ae-b08461a35e1b/volumes" Jan 21 16:07:52 crc kubenswrapper[4902]: I0121 16:07:52.511816 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:52 crc kubenswrapper[4902]: I0121 16:07:52.512673 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:54 crc kubenswrapper[4902]: I0121 16:07:54.812649 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:54 crc kubenswrapper[4902]: I0121 16:07:54.814366 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:54 crc kubenswrapper[4902]: I0121 16:07:54.930216 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-84746f8478-mdj2b"] Jan 21 16:07:54 crc kubenswrapper[4902]: I0121 16:07:54.931223 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-84746f8478-mdj2b" podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerName="proxy-httpd" containerID="cri-o://43e71d5e398cf8f40ce3eb06bd5ddd543ee0ab417f623428bd4d97f168c68a10" gracePeriod=30 Jan 21 16:07:54 crc kubenswrapper[4902]: I0121 16:07:54.931384 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-84746f8478-mdj2b" podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerName="proxy-server" containerID="cri-o://404c4a3c21381e7cc61f99e3986fd4e1183c5fbe8f0d1f4a4f670f8a4ba3edf5" gracePeriod=30 Jan 21 16:07:55 crc kubenswrapper[4902]: I0121 16:07:55.950370 4902 generic.go:334] "Generic (PLEG): container finished" podID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerID="404c4a3c21381e7cc61f99e3986fd4e1183c5fbe8f0d1f4a4f670f8a4ba3edf5" exitCode=0 Jan 21 16:07:55 crc kubenswrapper[4902]: I0121 16:07:55.950648 4902 generic.go:334] "Generic (PLEG): container finished" podID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerID="43e71d5e398cf8f40ce3eb06bd5ddd543ee0ab417f623428bd4d97f168c68a10" exitCode=0 Jan 21 16:07:55 crc kubenswrapper[4902]: I0121 16:07:55.950419 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84746f8478-mdj2b" event={"ID":"de4224d4-f2fc-49c1-99cb-a5be69aa192a","Type":"ContainerDied","Data":"404c4a3c21381e7cc61f99e3986fd4e1183c5fbe8f0d1f4a4f670f8a4ba3edf5"} Jan 21 16:07:55 crc kubenswrapper[4902]: I0121 16:07:55.950686 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84746f8478-mdj2b" 
event={"ID":"de4224d4-f2fc-49c1-99cb-a5be69aa192a","Type":"ContainerDied","Data":"43e71d5e398cf8f40ce3eb06bd5ddd543ee0ab417f623428bd4d97f168c68a10"} Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.119587 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.228014 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-config-data\") pod \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.228451 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-log-httpd\") pod \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.228631 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-combined-ca-bundle\") pod \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.228751 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67jhd\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-kube-api-access-67jhd\") pod \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.228889 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-run-httpd\") pod \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.229088 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-etc-swift\") pod \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.231073 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "de4224d4-f2fc-49c1-99cb-a5be69aa192a" (UID: "de4224d4-f2fc-49c1-99cb-a5be69aa192a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.231216 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "de4224d4-f2fc-49c1-99cb-a5be69aa192a" (UID: "de4224d4-f2fc-49c1-99cb-a5be69aa192a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.235783 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "de4224d4-f2fc-49c1-99cb-a5be69aa192a" (UID: "de4224d4-f2fc-49c1-99cb-a5be69aa192a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.236247 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-kube-api-access-67jhd" (OuterVolumeSpecName: "kube-api-access-67jhd") pod "de4224d4-f2fc-49c1-99cb-a5be69aa192a" (UID: "de4224d4-f2fc-49c1-99cb-a5be69aa192a"). InnerVolumeSpecName "kube-api-access-67jhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.273422 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de4224d4-f2fc-49c1-99cb-a5be69aa192a" (UID: "de4224d4-f2fc-49c1-99cb-a5be69aa192a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.281813 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-config-data" (OuterVolumeSpecName: "config-data") pod "de4224d4-f2fc-49c1-99cb-a5be69aa192a" (UID: "de4224d4-f2fc-49c1-99cb-a5be69aa192a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.331201 4902 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.331235 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.331247 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.331257 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.331272 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67jhd\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-kube-api-access-67jhd\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.331282 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.961743 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.962987 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84746f8478-mdj2b" event={"ID":"de4224d4-f2fc-49c1-99cb-a5be69aa192a","Type":"ContainerDied","Data":"1bb55214e5dca05bf8741ccdd7566d6ef5e813cbe0984652ffe8f4ee39b2d239"} Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.963098 4902 scope.go:117] "RemoveContainer" containerID="404c4a3c21381e7cc61f99e3986fd4e1183c5fbe8f0d1f4a4f670f8a4ba3edf5" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.983746 4902 scope.go:117] "RemoveContainer" containerID="43e71d5e398cf8f40ce3eb06bd5ddd543ee0ab417f623428bd4d97f168c68a10" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.993249 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-84746f8478-mdj2b"] Jan 21 16:07:57 crc kubenswrapper[4902]: I0121 16:07:57.000156 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-84746f8478-mdj2b"] Jan 21 16:07:58 crc kubenswrapper[4902]: I0121 16:07:58.309735 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" path="/var/lib/kubelet/pods/de4224d4-f2fc-49c1-99cb-a5be69aa192a/volumes" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.967194 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-nh5zs"] Jan 21 16:08:00 crc kubenswrapper[4902]: E0121 16:08:00.968911 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerName="proxy-server" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.968948 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerName="proxy-server" Jan 21 16:08:00 crc kubenswrapper[4902]: E0121 16:08:00.968978 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4000cb23-899c-4f52-8c37-8e1c7108a21d" containerName="swift-ring-rebalance" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.968986 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4000cb23-899c-4f52-8c37-8e1c7108a21d" containerName="swift-ring-rebalance" Jan 21 16:08:00 crc kubenswrapper[4902]: E0121 16:08:00.969008 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerName="proxy-httpd" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.969017 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerName="proxy-httpd" Jan 21 16:08:00 crc kubenswrapper[4902]: E0121 16:08:00.969119 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" containerName="init" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.969130 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" containerName="init" Jan 21 16:08:00 crc kubenswrapper[4902]: E0121 16:08:00.969158 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" containerName="dnsmasq-dns" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.969165 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" containerName="dnsmasq-dns" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.970366 4902 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerName="proxy-httpd" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.970386 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4000cb23-899c-4f52-8c37-8e1c7108a21d" containerName="swift-ring-rebalance" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.970409 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerName="proxy-server" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.970424 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" containerName="dnsmasq-dns" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.971189 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.983903 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-nh5zs"] Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.064084 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5eaa-account-create-update-6b2pj"] Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.065679 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.068294 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.071821 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5eaa-account-create-update-6b2pj"] Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.128212 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316e80e8-1286-4be7-b686-90693f8e7c95-operator-scripts\") pod \"cinder-db-create-nh5zs\" (UID: \"316e80e8-1286-4be7-b686-90693f8e7c95\") " pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.128275 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfkj\" (UniqueName: \"kubernetes.io/projected/316e80e8-1286-4be7-b686-90693f8e7c95-kube-api-access-hqfkj\") pod \"cinder-db-create-nh5zs\" (UID: \"316e80e8-1286-4be7-b686-90693f8e7c95\") " pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.229374 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh28h\" (UniqueName: \"kubernetes.io/projected/d8d97084-2d8b-44c2-877e-b09211b7d84d-kube-api-access-mh28h\") pod \"cinder-5eaa-account-create-update-6b2pj\" (UID: \"d8d97084-2d8b-44c2-877e-b09211b7d84d\") " pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.229681 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316e80e8-1286-4be7-b686-90693f8e7c95-operator-scripts\") pod \"cinder-db-create-nh5zs\" (UID: \"316e80e8-1286-4be7-b686-90693f8e7c95\") " pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.229790 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqfkj\" (UniqueName: 
\"kubernetes.io/projected/316e80e8-1286-4be7-b686-90693f8e7c95-kube-api-access-hqfkj\") pod \"cinder-db-create-nh5zs\" (UID: \"316e80e8-1286-4be7-b686-90693f8e7c95\") " pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.229899 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d97084-2d8b-44c2-877e-b09211b7d84d-operator-scripts\") pod \"cinder-5eaa-account-create-update-6b2pj\" (UID: \"d8d97084-2d8b-44c2-877e-b09211b7d84d\") " pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.230476 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316e80e8-1286-4be7-b686-90693f8e7c95-operator-scripts\") pod \"cinder-db-create-nh5zs\" (UID: \"316e80e8-1286-4be7-b686-90693f8e7c95\") " pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.249607 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqfkj\" (UniqueName: \"kubernetes.io/projected/316e80e8-1286-4be7-b686-90693f8e7c95-kube-api-access-hqfkj\") pod \"cinder-db-create-nh5zs\" (UID: \"316e80e8-1286-4be7-b686-90693f8e7c95\") " pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.300602 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.332288 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d97084-2d8b-44c2-877e-b09211b7d84d-operator-scripts\") pod \"cinder-5eaa-account-create-update-6b2pj\" (UID: \"d8d97084-2d8b-44c2-877e-b09211b7d84d\") " pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.332433 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh28h\" (UniqueName: \"kubernetes.io/projected/d8d97084-2d8b-44c2-877e-b09211b7d84d-kube-api-access-mh28h\") pod \"cinder-5eaa-account-create-update-6b2pj\" (UID: \"d8d97084-2d8b-44c2-877e-b09211b7d84d\") " pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.333746 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d97084-2d8b-44c2-877e-b09211b7d84d-operator-scripts\") pod \"cinder-5eaa-account-create-update-6b2pj\" (UID: \"d8d97084-2d8b-44c2-877e-b09211b7d84d\") " pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.361606 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh28h\" (UniqueName: \"kubernetes.io/projected/d8d97084-2d8b-44c2-877e-b09211b7d84d-kube-api-access-mh28h\") pod \"cinder-5eaa-account-create-update-6b2pj\" (UID: \"d8d97084-2d8b-44c2-877e-b09211b7d84d\") " pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.382378 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:01 crc kubenswrapper[4902]: W0121 16:08:01.843894 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod316e80e8_1286_4be7_b686_90693f8e7c95.slice/crio-b9431160a0f22affe0b0b83a370705ec6edb2cf1c742104612f5012b2c35c1ca WatchSource:0}: Error finding container b9431160a0f22affe0b0b83a370705ec6edb2cf1c742104612f5012b2c35c1ca: Status 404 returned error can't find the container with id b9431160a0f22affe0b0b83a370705ec6edb2cf1c742104612f5012b2c35c1ca Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.847372 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-nh5zs"] Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.924766 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5eaa-account-create-update-6b2pj"] Jan 21 16:08:01 crc kubenswrapper[4902]: W0121 16:08:01.927741 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8d97084_2d8b_44c2_877e_b09211b7d84d.slice/crio-4b272761778b72e881a56869b3b7806f9e06dd2fe05fe2e1e12fa23cbd234279 WatchSource:0}: Error finding container 4b272761778b72e881a56869b3b7806f9e06dd2fe05fe2e1e12fa23cbd234279: Status 404 returned error can't find the container with id 4b272761778b72e881a56869b3b7806f9e06dd2fe05fe2e1e12fa23cbd234279 Jan 21 16:08:02 crc kubenswrapper[4902]: I0121 16:08:02.020230 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5eaa-account-create-update-6b2pj" event={"ID":"d8d97084-2d8b-44c2-877e-b09211b7d84d","Type":"ContainerStarted","Data":"4b272761778b72e881a56869b3b7806f9e06dd2fe05fe2e1e12fa23cbd234279"} Jan 21 16:08:02 crc kubenswrapper[4902]: I0121 16:08:02.023482 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nh5zs" event={"ID":"316e80e8-1286-4be7-b686-90693f8e7c95","Type":"ContainerStarted","Data":"b9431160a0f22affe0b0b83a370705ec6edb2cf1c742104612f5012b2c35c1ca"} Jan 21 16:08:02 crc kubenswrapper[4902]: I0121 16:08:02.045022 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-nh5zs" podStartSLOduration=2.045000066 podStartE2EDuration="2.045000066s" podCreationTimestamp="2026-01-21 16:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:08:02.037970869 +0000 UTC m=+5644.114803898" watchObservedRunningTime="2026-01-21 16:08:02.045000066 +0000 UTC m=+5644.121833095" Jan 21 16:08:03 crc kubenswrapper[4902]: I0121 16:08:03.033067 4902 generic.go:334] "Generic (PLEG): container finished" podID="316e80e8-1286-4be7-b686-90693f8e7c95" containerID="08d576dd917c4a5813c6d9db476bd6fcba6691cafc01f2c3b9a02a013671f644" exitCode=0 Jan 21 16:08:03 crc kubenswrapper[4902]: I0121 16:08:03.033112 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nh5zs" event={"ID":"316e80e8-1286-4be7-b686-90693f8e7c95","Type":"ContainerDied","Data":"08d576dd917c4a5813c6d9db476bd6fcba6691cafc01f2c3b9a02a013671f644"} Jan 21 16:08:03 crc kubenswrapper[4902]: I0121 16:08:03.035815 4902 generic.go:334] "Generic (PLEG): container finished" podID="d8d97084-2d8b-44c2-877e-b09211b7d84d" containerID="f9ff394d565c17472cbe0972635a74048f6673c7d9a12c90517226508f39624b" exitCode=0 Jan 21 16:08:03 crc kubenswrapper[4902]: 
I0121 16:08:03.035906 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5eaa-account-create-update-6b2pj" event={"ID":"d8d97084-2d8b-44c2-877e-b09211b7d84d","Type":"ContainerDied","Data":"f9ff394d565c17472cbe0972635a74048f6673c7d9a12c90517226508f39624b"} Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.512626 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.521604 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.703874 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316e80e8-1286-4be7-b686-90693f8e7c95-operator-scripts\") pod \"316e80e8-1286-4be7-b686-90693f8e7c95\" (UID: \"316e80e8-1286-4be7-b686-90693f8e7c95\") " Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.704154 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh28h\" (UniqueName: \"kubernetes.io/projected/d8d97084-2d8b-44c2-877e-b09211b7d84d-kube-api-access-mh28h\") pod \"d8d97084-2d8b-44c2-877e-b09211b7d84d\" (UID: \"d8d97084-2d8b-44c2-877e-b09211b7d84d\") " Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.704186 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqfkj\" (UniqueName: \"kubernetes.io/projected/316e80e8-1286-4be7-b686-90693f8e7c95-kube-api-access-hqfkj\") pod \"316e80e8-1286-4be7-b686-90693f8e7c95\" (UID: \"316e80e8-1286-4be7-b686-90693f8e7c95\") " Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.704849 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/316e80e8-1286-4be7-b686-90693f8e7c95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "316e80e8-1286-4be7-b686-90693f8e7c95" (UID: "316e80e8-1286-4be7-b686-90693f8e7c95"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.705135 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d97084-2d8b-44c2-877e-b09211b7d84d-operator-scripts\") pod \"d8d97084-2d8b-44c2-877e-b09211b7d84d\" (UID: \"d8d97084-2d8b-44c2-877e-b09211b7d84d\") " Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.705461 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d97084-2d8b-44c2-877e-b09211b7d84d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8d97084-2d8b-44c2-877e-b09211b7d84d" (UID: "d8d97084-2d8b-44c2-877e-b09211b7d84d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.705686 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d97084-2d8b-44c2-877e-b09211b7d84d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.705707 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316e80e8-1286-4be7-b686-90693f8e7c95-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.709406 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d97084-2d8b-44c2-877e-b09211b7d84d-kube-api-access-mh28h" (OuterVolumeSpecName: "kube-api-access-mh28h") pod "d8d97084-2d8b-44c2-877e-b09211b7d84d" (UID: "d8d97084-2d8b-44c2-877e-b09211b7d84d"). InnerVolumeSpecName "kube-api-access-mh28h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.710250 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316e80e8-1286-4be7-b686-90693f8e7c95-kube-api-access-hqfkj" (OuterVolumeSpecName: "kube-api-access-hqfkj") pod "316e80e8-1286-4be7-b686-90693f8e7c95" (UID: "316e80e8-1286-4be7-b686-90693f8e7c95"). InnerVolumeSpecName "kube-api-access-hqfkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.807694 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh28h\" (UniqueName: \"kubernetes.io/projected/d8d97084-2d8b-44c2-877e-b09211b7d84d-kube-api-access-mh28h\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.808080 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqfkj\" (UniqueName: \"kubernetes.io/projected/316e80e8-1286-4be7-b686-90693f8e7c95-kube-api-access-hqfkj\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:05 crc kubenswrapper[4902]: I0121 16:08:05.052655 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nh5zs" event={"ID":"316e80e8-1286-4be7-b686-90693f8e7c95","Type":"ContainerDied","Data":"b9431160a0f22affe0b0b83a370705ec6edb2cf1c742104612f5012b2c35c1ca"} Jan 21 16:08:05 crc kubenswrapper[4902]: I0121 16:08:05.052698 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9431160a0f22affe0b0b83a370705ec6edb2cf1c742104612f5012b2c35c1ca" Jan 21 16:08:05 crc kubenswrapper[4902]: I0121 16:08:05.052678 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:05 crc kubenswrapper[4902]: I0121 16:08:05.056131 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5eaa-account-create-update-6b2pj" event={"ID":"d8d97084-2d8b-44c2-877e-b09211b7d84d","Type":"ContainerDied","Data":"4b272761778b72e881a56869b3b7806f9e06dd2fe05fe2e1e12fa23cbd234279"} Jan 21 16:08:05 crc kubenswrapper[4902]: I0121 16:08:05.056169 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:05 crc kubenswrapper[4902]: I0121 16:08:05.056177 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b272761778b72e881a56869b3b7806f9e06dd2fe05fe2e1e12fa23cbd234279" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.338496 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-k7rr4"] Jan 21 16:08:06 crc kubenswrapper[4902]: E0121 16:08:06.338910 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316e80e8-1286-4be7-b686-90693f8e7c95" containerName="mariadb-database-create" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.338927 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="316e80e8-1286-4be7-b686-90693f8e7c95" containerName="mariadb-database-create" Jan 21 16:08:06 crc kubenswrapper[4902]: E0121 16:08:06.338945 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d97084-2d8b-44c2-877e-b09211b7d84d" containerName="mariadb-account-create-update" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.338954 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d97084-2d8b-44c2-877e-b09211b7d84d" containerName="mariadb-account-create-update" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.339193 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d97084-2d8b-44c2-877e-b09211b7d84d" containerName="mariadb-account-create-update" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.339220 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="316e80e8-1286-4be7-b686-90693f8e7c95" containerName="mariadb-database-create" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.340093 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.342117 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.345387 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.346084 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-v844l" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.352723 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-k7rr4"] Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.434958 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-config-data\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.435208 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-combined-ca-bundle\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.435280 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-db-sync-config-data\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.435331 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-scripts\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.435766 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ldfm\" (UniqueName: \"kubernetes.io/projected/610eddf1-f5de-40bb-8946-2092c4edfa9c-kube-api-access-8ldfm\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.435827 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/610eddf1-f5de-40bb-8946-2092c4edfa9c-etc-machine-id\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.537594 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ldfm\" (UniqueName: \"kubernetes.io/projected/610eddf1-f5de-40bb-8946-2092c4edfa9c-kube-api-access-8ldfm\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.537654 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/610eddf1-f5de-40bb-8946-2092c4edfa9c-etc-machine-id\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.537707 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-config-data\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.537744 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-combined-ca-bundle\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.537769 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-db-sync-config-data\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.537794 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-scripts\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.537804 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/610eddf1-f5de-40bb-8946-2092c4edfa9c-etc-machine-id\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.543976 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-scripts\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.544093 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-combined-ca-bundle\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.547031 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-config-data\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.547425 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-db-sync-config-data\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " 
pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.563594 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ldfm\" (UniqueName: \"kubernetes.io/projected/610eddf1-f5de-40bb-8946-2092c4edfa9c-kube-api-access-8ldfm\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.657639 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:07 crc kubenswrapper[4902]: I0121 16:08:07.126884 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-k7rr4"] Jan 21 16:08:08 crc kubenswrapper[4902]: I0121 16:08:08.081675 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-k7rr4" event={"ID":"610eddf1-f5de-40bb-8946-2092c4edfa9c","Type":"ContainerStarted","Data":"73644d909cc281b656bcc92b7fe668f43e2f43e5a2df8a9a26185cf7ab096d45"} Jan 21 16:08:08 crc kubenswrapper[4902]: I0121 16:08:08.082007 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-k7rr4" event={"ID":"610eddf1-f5de-40bb-8946-2092c4edfa9c","Type":"ContainerStarted","Data":"1fd66fcb7429c5e27c4e572b6ce058e6f6500bca307d127355c5aefd7796184b"} Jan 21 16:08:08 crc kubenswrapper[4902]: I0121 16:08:08.104453 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-k7rr4" podStartSLOduration=2.104427761 podStartE2EDuration="2.104427761s" podCreationTimestamp="2026-01-21 16:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:08:08.095585473 +0000 UTC m=+5650.172418512" watchObservedRunningTime="2026-01-21 16:08:08.104427761 +0000 UTC m=+5650.181260790" Jan 21 16:08:11 crc kubenswrapper[4902]: I0121 16:08:11.111123 4902 generic.go:334] "Generic (PLEG): container finished" podID="610eddf1-f5de-40bb-8946-2092c4edfa9c" containerID="73644d909cc281b656bcc92b7fe668f43e2f43e5a2df8a9a26185cf7ab096d45" exitCode=0 Jan 21 16:08:11 crc kubenswrapper[4902]: I0121 16:08:11.111225 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-k7rr4" event={"ID":"610eddf1-f5de-40bb-8946-2092c4edfa9c","Type":"ContainerDied","Data":"73644d909cc281b656bcc92b7fe668f43e2f43e5a2df8a9a26185cf7ab096d45"} Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.461803 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.651586 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-config-data\") pod \"610eddf1-f5de-40bb-8946-2092c4edfa9c\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.651853 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-combined-ca-bundle\") pod \"610eddf1-f5de-40bb-8946-2092c4edfa9c\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.651950 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-db-sync-config-data\") pod \"610eddf1-f5de-40bb-8946-2092c4edfa9c\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.651970 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ldfm\" (UniqueName: \"kubernetes.io/projected/610eddf1-f5de-40bb-8946-2092c4edfa9c-kube-api-access-8ldfm\") pod \"610eddf1-f5de-40bb-8946-2092c4edfa9c\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.652066 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/610eddf1-f5de-40bb-8946-2092c4edfa9c-etc-machine-id\") pod \"610eddf1-f5de-40bb-8946-2092c4edfa9c\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.652124 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-scripts\") pod \"610eddf1-f5de-40bb-8946-2092c4edfa9c\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.652417 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/610eddf1-f5de-40bb-8946-2092c4edfa9c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "610eddf1-f5de-40bb-8946-2092c4edfa9c" (UID: "610eddf1-f5de-40bb-8946-2092c4edfa9c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.662296 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "610eddf1-f5de-40bb-8946-2092c4edfa9c" (UID: "610eddf1-f5de-40bb-8946-2092c4edfa9c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.662348 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-scripts" (OuterVolumeSpecName: "scripts") pod "610eddf1-f5de-40bb-8946-2092c4edfa9c" (UID: "610eddf1-f5de-40bb-8946-2092c4edfa9c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.662545 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/610eddf1-f5de-40bb-8946-2092c4edfa9c-kube-api-access-8ldfm" (OuterVolumeSpecName: "kube-api-access-8ldfm") pod "610eddf1-f5de-40bb-8946-2092c4edfa9c" (UID: "610eddf1-f5de-40bb-8946-2092c4edfa9c"). InnerVolumeSpecName "kube-api-access-8ldfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.675804 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "610eddf1-f5de-40bb-8946-2092c4edfa9c" (UID: "610eddf1-f5de-40bb-8946-2092c4edfa9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.697361 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-config-data" (OuterVolumeSpecName: "config-data") pod "610eddf1-f5de-40bb-8946-2092c4edfa9c" (UID: "610eddf1-f5de-40bb-8946-2092c4edfa9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.754365 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/610eddf1-f5de-40bb-8946-2092c4edfa9c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.754398 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.754407 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.754416 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.754425 4902 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.754434 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ldfm\" (UniqueName: \"kubernetes.io/projected/610eddf1-f5de-40bb-8946-2092c4edfa9c-kube-api-access-8ldfm\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.130237 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-k7rr4" event={"ID":"610eddf1-f5de-40bb-8946-2092c4edfa9c","Type":"ContainerDied","Data":"1fd66fcb7429c5e27c4e572b6ce058e6f6500bca307d127355c5aefd7796184b"} Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.130280 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fd66fcb7429c5e27c4e572b6ce058e6f6500bca307d127355c5aefd7796184b" Jan 21 16:08:13 crc 
kubenswrapper[4902]: I0121 16:08:13.130333 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.471995 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69884d7f9-kfzgg"] Jan 21 16:08:13 crc kubenswrapper[4902]: E0121 16:08:13.472365 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="610eddf1-f5de-40bb-8946-2092c4edfa9c" containerName="cinder-db-sync" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.472376 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="610eddf1-f5de-40bb-8946-2092c4edfa9c" containerName="cinder-db-sync" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.472533 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="610eddf1-f5de-40bb-8946-2092c4edfa9c" containerName="cinder-db-sync" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.475307 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.485954 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69884d7f9-kfzgg"] Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.568614 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-config\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.568926 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-dns-svc\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.569308 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-sb\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.569412 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2vq2\" (UniqueName: \"kubernetes.io/projected/5a487ade-04df-42df-b2a4-694f02a2ebdb-kube-api-access-j2vq2\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.569663 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-nb\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.646489 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.648613 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.652850 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-v844l" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.653192 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.653433 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.653685 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.667267 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.671065 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-sb\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.671136 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2vq2\" (UniqueName: \"kubernetes.io/projected/5a487ade-04df-42df-b2a4-694f02a2ebdb-kube-api-access-j2vq2\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.671250 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-nb\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.671296 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-config\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.671322 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-dns-svc\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.672500 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-config\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.672545 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-nb\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc 
kubenswrapper[4902]: I0121 16:08:13.673190 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-dns-svc\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.679271 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-sb\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.701288 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2vq2\" (UniqueName: \"kubernetes.io/projected/5a487ade-04df-42df-b2a4-694f02a2ebdb-kube-api-access-j2vq2\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.772441 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjx6r\" (UniqueName: \"kubernetes.io/projected/492a4cd7-f76e-408e-9f3e-6cb25b40248b-kube-api-access-cjx6r\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.772522 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data-custom\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.772553 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-scripts\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.772577 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.772634 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/492a4cd7-f76e-408e-9f3e-6cb25b40248b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.772664 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/492a4cd7-f76e-408e-9f3e-6cb25b40248b-logs\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.772758 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.793925 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.874096 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data-custom\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.874140 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-scripts\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.874181 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.874253 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/492a4cd7-f76e-408e-9f3e-6cb25b40248b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.874283 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/492a4cd7-f76e-408e-9f3e-6cb25b40248b-logs\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.874380 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.874424 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjx6r\" (UniqueName: \"kubernetes.io/projected/492a4cd7-f76e-408e-9f3e-6cb25b40248b-kube-api-access-cjx6r\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.874803 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/492a4cd7-f76e-408e-9f3e-6cb25b40248b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.875232 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/492a4cd7-f76e-408e-9f3e-6cb25b40248b-logs\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc 
kubenswrapper[4902]: I0121 16:08:13.878435 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data-custom\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.879652 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-scripts\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.883154 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.888415 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.895514 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjx6r\" (UniqueName: \"kubernetes.io/projected/492a4cd7-f76e-408e-9f3e-6cb25b40248b-kube-api-access-cjx6r\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.967407 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:14 crc kubenswrapper[4902]: I0121 16:08:14.317005 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:14 crc kubenswrapper[4902]: I0121 16:08:14.328321 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69884d7f9-kfzgg"] Jan 21 16:08:15 crc kubenswrapper[4902]: I0121 16:08:15.160615 4902 generic.go:334] "Generic (PLEG): container finished" podID="5a487ade-04df-42df-b2a4-694f02a2ebdb" containerID="5cb68b975e1bdae1829713fed46eef25b840bf53e0813c38525f5a6f921ca76c" exitCode=0 Jan 21 16:08:15 crc kubenswrapper[4902]: I0121 16:08:15.160800 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" event={"ID":"5a487ade-04df-42df-b2a4-694f02a2ebdb","Type":"ContainerDied","Data":"5cb68b975e1bdae1829713fed46eef25b840bf53e0813c38525f5a6f921ca76c"} Jan 21 16:08:15 crc kubenswrapper[4902]: I0121 16:08:15.161193 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" event={"ID":"5a487ade-04df-42df-b2a4-694f02a2ebdb","Type":"ContainerStarted","Data":"cce4d19f30fd69fa08a849c8261f82a05dc1b4c6705764be924dca9e7b74f41e"} Jan 21 16:08:15 crc kubenswrapper[4902]: I0121 16:08:15.164477 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492a4cd7-f76e-408e-9f3e-6cb25b40248b","Type":"ContainerStarted","Data":"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc"} Jan 21 16:08:15 crc kubenswrapper[4902]: I0121 16:08:15.164521 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492a4cd7-f76e-408e-9f3e-6cb25b40248b","Type":"ContainerStarted","Data":"3f0d9ffbf203c9cff62aa57b5dff59c75bba53abd9b322c0b3db48b5a5865b5a"} Jan 21 16:08:16 crc kubenswrapper[4902]: I0121 16:08:16.148811 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:16 crc kubenswrapper[4902]: I0121 16:08:16.175550 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" event={"ID":"5a487ade-04df-42df-b2a4-694f02a2ebdb","Type":"ContainerStarted","Data":"951c4e5c6873eb9e83588429fe8aca2e5cbae26eba168613139a62833929e049"} Jan 21 16:08:16 crc kubenswrapper[4902]: I0121 16:08:16.175664 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:16 crc kubenswrapper[4902]: I0121 16:08:16.177854 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492a4cd7-f76e-408e-9f3e-6cb25b40248b","Type":"ContainerStarted","Data":"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023"} Jan 21 16:08:16 crc kubenswrapper[4902]: I0121 16:08:16.178230 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 16:08:16 crc kubenswrapper[4902]: I0121 16:08:16.199322 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" podStartSLOduration=3.199306466 podStartE2EDuration="3.199306466s" podCreationTimestamp="2026-01-21 16:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:08:16.194404248 +0000 UTC m=+5658.271237277" watchObservedRunningTime="2026-01-21 16:08:16.199306466 +0000 UTC m=+5658.276139495" Jan 21 16:08:16 crc kubenswrapper[4902]: 
I0121 16:08:16.219031 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.21901043 podStartE2EDuration="3.21901043s" podCreationTimestamp="2026-01-21 16:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:08:16.211807167 +0000 UTC m=+5658.288640196" watchObservedRunningTime="2026-01-21 16:08:16.21901043 +0000 UTC m=+5658.295843459" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.186209 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerName="cinder-api" containerID="cri-o://c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023" gracePeriod=30 Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.187159 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerName="cinder-api-log" containerID="cri-o://86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc" gracePeriod=30 Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.754086 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856103 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/492a4cd7-f76e-408e-9f3e-6cb25b40248b-logs\") pod \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856440 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-combined-ca-bundle\") pod \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856479 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjx6r\" (UniqueName: \"kubernetes.io/projected/492a4cd7-f76e-408e-9f3e-6cb25b40248b-kube-api-access-cjx6r\") pod \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856530 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/492a4cd7-f76e-408e-9f3e-6cb25b40248b-logs" (OuterVolumeSpecName: "logs") pod "492a4cd7-f76e-408e-9f3e-6cb25b40248b" (UID: "492a4cd7-f76e-408e-9f3e-6cb25b40248b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856546 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data-custom\") pod \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856719 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/492a4cd7-f76e-408e-9f3e-6cb25b40248b-etc-machine-id\") pod \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856777 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-scripts\") pod \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856810 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data\") pod \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856837 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/492a4cd7-f76e-408e-9f3e-6cb25b40248b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "492a4cd7-f76e-408e-9f3e-6cb25b40248b" (UID: "492a4cd7-f76e-408e-9f3e-6cb25b40248b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.857542 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/492a4cd7-f76e-408e-9f3e-6cb25b40248b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.857564 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/492a4cd7-f76e-408e-9f3e-6cb25b40248b-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.862668 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/492a4cd7-f76e-408e-9f3e-6cb25b40248b-kube-api-access-cjx6r" (OuterVolumeSpecName: "kube-api-access-cjx6r") pod "492a4cd7-f76e-408e-9f3e-6cb25b40248b" (UID: "492a4cd7-f76e-408e-9f3e-6cb25b40248b"). InnerVolumeSpecName "kube-api-access-cjx6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.869174 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "492a4cd7-f76e-408e-9f3e-6cb25b40248b" (UID: "492a4cd7-f76e-408e-9f3e-6cb25b40248b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.869205 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-scripts" (OuterVolumeSpecName: "scripts") pod "492a4cd7-f76e-408e-9f3e-6cb25b40248b" (UID: "492a4cd7-f76e-408e-9f3e-6cb25b40248b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.882426 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "492a4cd7-f76e-408e-9f3e-6cb25b40248b" (UID: "492a4cd7-f76e-408e-9f3e-6cb25b40248b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.916759 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data" (OuterVolumeSpecName: "config-data") pod "492a4cd7-f76e-408e-9f3e-6cb25b40248b" (UID: "492a4cd7-f76e-408e-9f3e-6cb25b40248b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.958871 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjx6r\" (UniqueName: \"kubernetes.io/projected/492a4cd7-f76e-408e-9f3e-6cb25b40248b-kube-api-access-cjx6r\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.958905 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.958917 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.958925 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.958933 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.195997 4902 generic.go:334] "Generic (PLEG): container finished" podID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerID="c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023" exitCode=0 Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.196033 4902 generic.go:334] "Generic (PLEG): container finished" podID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerID="86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc" exitCode=143 Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.196072 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492a4cd7-f76e-408e-9f3e-6cb25b40248b","Type":"ContainerDied","Data":"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023"} Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.196102 
4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492a4cd7-f76e-408e-9f3e-6cb25b40248b","Type":"ContainerDied","Data":"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc"} Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.196114 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.196127 4902 scope.go:117] "RemoveContainer" containerID="c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.196115 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492a4cd7-f76e-408e-9f3e-6cb25b40248b","Type":"ContainerDied","Data":"3f0d9ffbf203c9cff62aa57b5dff59c75bba53abd9b322c0b3db48b5a5865b5a"} Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.227552 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.227623 4902 scope.go:117] "RemoveContainer" containerID="86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.238102 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.250775 4902 scope.go:117] "RemoveContainer" containerID="c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023" Jan 21 16:08:18 crc kubenswrapper[4902]: E0121 16:08:18.251271 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023\": container with ID starting with c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023 not found: ID does not exist" containerID="c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.251315 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023"} err="failed to get container status \"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023\": rpc error: code = NotFound desc = could not find container \"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023\": container with ID starting with c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023 not found: ID does not exist" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.251351 4902 scope.go:117] "RemoveContainer" containerID="86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc" Jan 21 16:08:18 crc kubenswrapper[4902]: E0121 16:08:18.251789 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc\": container with ID starting with 86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc not found: ID does not exist" containerID="86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.251819 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc"} err="failed to get container status 
\"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc\": rpc error: code = NotFound desc = could not find container \"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc\": container with ID starting with 86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc not found: ID does not exist" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.251846 4902 scope.go:117] "RemoveContainer" containerID="c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.252116 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023"} err="failed to get container status \"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023\": rpc error: code = NotFound desc = could not find container \"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023\": container with ID starting with c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023 not found: ID does not exist" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.252137 4902 scope.go:117] "RemoveContainer" containerID="86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.252382 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc"} err="failed to get container status \"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc\": rpc error: code = NotFound desc = could not find container \"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc\": container with ID starting with 86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc not found: ID does not exist" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.263194 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:18 crc kubenswrapper[4902]: E0121 16:08:18.263532 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerName="cinder-api" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.263548 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerName="cinder-api" Jan 21 16:08:18 crc kubenswrapper[4902]: E0121 16:08:18.263586 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerName="cinder-api-log" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.263592 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerName="cinder-api-log" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.263740 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerName="cinder-api" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.263761 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerName="cinder-api-log" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.264629 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.267183 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-v844l" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.267769 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.268035 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.268232 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.268386 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.268551 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.281437 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.311664 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" path="/var/lib/kubelet/pods/492a4cd7-f76e-408e-9f3e-6cb25b40248b/volumes" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.365625 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.365668 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data-custom\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.365795 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.365826 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-logs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.365871 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.365892 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.365988 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-scripts\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.366072 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4vqf\" (UniqueName: \"kubernetes.io/projected/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-kube-api-access-z4vqf\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.366170 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468198 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468242 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data-custom\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468311 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468337 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-logs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468368 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468383 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468402 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-scripts\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468422 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4vqf\" (UniqueName: \"kubernetes.io/projected/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-kube-api-access-z4vqf\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468450 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.469009 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.469426 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-logs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.472242 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.474395 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-scripts\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.475007 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.475074 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data-custom\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.475747 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.476493 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.489961 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4vqf\" (UniqueName: \"kubernetes.io/projected/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-kube-api-access-z4vqf\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.581570 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:19 crc kubenswrapper[4902]: I0121 16:08:19.087924 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:19 crc kubenswrapper[4902]: W0121 16:08:19.093826 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f00f389_2d9c_443a_bf45_f1ddb6cea29c.slice/crio-99bcc40edef719f6b32b7ecb95b8438fb7f37784a97bb90751ee8f29eb800619 WatchSource:0}: Error finding container 99bcc40edef719f6b32b7ecb95b8438fb7f37784a97bb90751ee8f29eb800619: Status 404 returned error can't find the container with id 99bcc40edef719f6b32b7ecb95b8438fb7f37784a97bb90751ee8f29eb800619 Jan 21 16:08:19 crc kubenswrapper[4902]: I0121 16:08:19.207484 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f00f389-2d9c-443a-bf45-f1ddb6cea29c","Type":"ContainerStarted","Data":"99bcc40edef719f6b32b7ecb95b8438fb7f37784a97bb90751ee8f29eb800619"} Jan 21 16:08:20 crc kubenswrapper[4902]: I0121 16:08:20.222145 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f00f389-2d9c-443a-bf45-f1ddb6cea29c","Type":"ContainerStarted","Data":"c57e300c081d7900319e10d0ff1145e3f13d8d1a05398598894df17dd6db366b"} Jan 21 16:08:21 crc kubenswrapper[4902]: I0121 16:08:21.235589 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f00f389-2d9c-443a-bf45-f1ddb6cea29c","Type":"ContainerStarted","Data":"528739ef35dc7389ee33f73742ca6fe14e22e7e82f89d0a372308467295c621a"} Jan 21 16:08:21 crc kubenswrapper[4902]: I0121 16:08:21.236085 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 16:08:21 crc kubenswrapper[4902]: I0121 16:08:21.269023 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.268993902 podStartE2EDuration="3.268993902s" podCreationTimestamp="2026-01-21 16:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:08:21.259085593 +0000 UTC m=+5663.335918652" watchObservedRunningTime="2026-01-21 16:08:21.268993902 +0000 UTC m=+5663.345826951" Jan 21 16:08:23 crc kubenswrapper[4902]: I0121 16:08:23.797386 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:23 crc kubenswrapper[4902]: I0121 16:08:23.894940 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fc6cc5b55-79wth"] Jan 21 16:08:23 crc kubenswrapper[4902]: I0121 16:08:23.895199 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" 
podUID="111bf0bc-8088-42d9-bf09-396b7d087ae8" containerName="dnsmasq-dns" containerID="cri-o://93731fa0d5c4e506f4bd5fe81d39edaa6cbe9eb7e4608fe707e85adf923a410f" gracePeriod=10 Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.287207 4902 generic.go:334] "Generic (PLEG): container finished" podID="111bf0bc-8088-42d9-bf09-396b7d087ae8" containerID="93731fa0d5c4e506f4bd5fe81d39edaa6cbe9eb7e4608fe707e85adf923a410f" exitCode=0 Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.287528 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" event={"ID":"111bf0bc-8088-42d9-bf09-396b7d087ae8","Type":"ContainerDied","Data":"93731fa0d5c4e506f4bd5fe81d39edaa6cbe9eb7e4608fe707e85adf923a410f"} Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.538349 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.694717 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-dns-svc\") pod \"111bf0bc-8088-42d9-bf09-396b7d087ae8\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.694857 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plr2q\" (UniqueName: \"kubernetes.io/projected/111bf0bc-8088-42d9-bf09-396b7d087ae8-kube-api-access-plr2q\") pod \"111bf0bc-8088-42d9-bf09-396b7d087ae8\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.694928 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-nb\") pod \"111bf0bc-8088-42d9-bf09-396b7d087ae8\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.694983 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-config\") pod \"111bf0bc-8088-42d9-bf09-396b7d087ae8\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.695023 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-sb\") pod \"111bf0bc-8088-42d9-bf09-396b7d087ae8\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.706156 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/111bf0bc-8088-42d9-bf09-396b7d087ae8-kube-api-access-plr2q" (OuterVolumeSpecName: "kube-api-access-plr2q") pod "111bf0bc-8088-42d9-bf09-396b7d087ae8" (UID: "111bf0bc-8088-42d9-bf09-396b7d087ae8"). InnerVolumeSpecName "kube-api-access-plr2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.744352 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "111bf0bc-8088-42d9-bf09-396b7d087ae8" (UID: "111bf0bc-8088-42d9-bf09-396b7d087ae8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.749029 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "111bf0bc-8088-42d9-bf09-396b7d087ae8" (UID: "111bf0bc-8088-42d9-bf09-396b7d087ae8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.759612 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "111bf0bc-8088-42d9-bf09-396b7d087ae8" (UID: "111bf0bc-8088-42d9-bf09-396b7d087ae8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.772759 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-config" (OuterVolumeSpecName: "config") pod "111bf0bc-8088-42d9-bf09-396b7d087ae8" (UID: "111bf0bc-8088-42d9-bf09-396b7d087ae8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.796556 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.796589 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plr2q\" (UniqueName: \"kubernetes.io/projected/111bf0bc-8088-42d9-bf09-396b7d087ae8-kube-api-access-plr2q\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.796603 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.796612 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.796621 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:25 crc kubenswrapper[4902]: I0121 16:08:25.296808 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" event={"ID":"111bf0bc-8088-42d9-bf09-396b7d087ae8","Type":"ContainerDied","Data":"755a6256ef5a838eba2c3bd36413f341c71947696922ab6ab33e27a445898c69"} Jan 21 16:08:25 crc kubenswrapper[4902]: I0121 16:08:25.296859 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:08:25 crc kubenswrapper[4902]: I0121 16:08:25.296870 4902 scope.go:117] "RemoveContainer" containerID="93731fa0d5c4e506f4bd5fe81d39edaa6cbe9eb7e4608fe707e85adf923a410f" Jan 21 16:08:25 crc kubenswrapper[4902]: I0121 16:08:25.317956 4902 scope.go:117] "RemoveContainer" containerID="f7e84713417d76194209c0593e58d711f067795272f7e92b6fd9b97ab7a3b30b" Jan 21 16:08:25 crc kubenswrapper[4902]: I0121 16:08:25.338991 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fc6cc5b55-79wth"] Jan 21 16:08:25 crc kubenswrapper[4902]: I0121 16:08:25.346938 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fc6cc5b55-79wth"] Jan 21 16:08:26 crc kubenswrapper[4902]: I0121 16:08:26.308232 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="111bf0bc-8088-42d9-bf09-396b7d087ae8" path="/var/lib/kubelet/pods/111bf0bc-8088-42d9-bf09-396b7d087ae8/volumes" Jan 21 16:08:30 crc kubenswrapper[4902]: I0121 16:08:30.387839 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.669986 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:08:47 crc kubenswrapper[4902]: E0121 16:08:47.670998 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111bf0bc-8088-42d9-bf09-396b7d087ae8" containerName="init" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.671013 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="111bf0bc-8088-42d9-bf09-396b7d087ae8" containerName="init" Jan 21 16:08:47 crc kubenswrapper[4902]: E0121 16:08:47.671034 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111bf0bc-8088-42d9-bf09-396b7d087ae8" containerName="dnsmasq-dns" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.671061 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="111bf0bc-8088-42d9-bf09-396b7d087ae8" containerName="dnsmasq-dns" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.671266 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="111bf0bc-8088-42d9-bf09-396b7d087ae8" containerName="dnsmasq-dns" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.672396 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.675683 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.693858 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.822497 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.822572 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-scripts\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.822595 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.822659 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gqz7\" (UniqueName: \"kubernetes.io/projected/4621cb0e-ad03-4a82-89a0-a14392def1e7-kube-api-access-2gqz7\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.823182 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4621cb0e-ad03-4a82-89a0-a14392def1e7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.823328 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.925354 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.925739 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-scripts\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.925898 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.926215 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gqz7\" (UniqueName: \"kubernetes.io/projected/4621cb0e-ad03-4a82-89a0-a14392def1e7-kube-api-access-2gqz7\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.926469 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4621cb0e-ad03-4a82-89a0-a14392def1e7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.926638 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.926648 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4621cb0e-ad03-4a82-89a0-a14392def1e7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.933937 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-scripts\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.934552 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.938509 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.938947 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.944895 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gqz7\" (UniqueName: \"kubernetes.io/projected/4621cb0e-ad03-4a82-89a0-a14392def1e7-kube-api-access-2gqz7\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " 
pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.993116 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:08:48 crc kubenswrapper[4902]: I0121 16:08:48.451771 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:08:48 crc kubenswrapper[4902]: I0121 16:08:48.498620 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4621cb0e-ad03-4a82-89a0-a14392def1e7","Type":"ContainerStarted","Data":"ce33e9839d5cefb904a6f325f236da539af4c95c84835eb4cf854486a05a8ed2"} Jan 21 16:08:48 crc kubenswrapper[4902]: I0121 16:08:48.881795 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:48 crc kubenswrapper[4902]: I0121 16:08:48.882124 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerName="cinder-api-log" containerID="cri-o://c57e300c081d7900319e10d0ff1145e3f13d8d1a05398598894df17dd6db366b" gracePeriod=30 Jan 21 16:08:48 crc kubenswrapper[4902]: I0121 16:08:48.882585 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerName="cinder-api" containerID="cri-o://528739ef35dc7389ee33f73742ca6fe14e22e7e82f89d0a372308467295c621a" gracePeriod=30 Jan 21 16:08:49 crc kubenswrapper[4902]: I0121 16:08:49.508796 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4621cb0e-ad03-4a82-89a0-a14392def1e7","Type":"ContainerStarted","Data":"4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e"} Jan 21 16:08:49 crc kubenswrapper[4902]: I0121 16:08:49.511367 4902 generic.go:334] "Generic (PLEG): container finished" podID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerID="c57e300c081d7900319e10d0ff1145e3f13d8d1a05398598894df17dd6db366b" exitCode=143 Jan 21 16:08:49 crc kubenswrapper[4902]: I0121 16:08:49.511393 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f00f389-2d9c-443a-bf45-f1ddb6cea29c","Type":"ContainerDied","Data":"c57e300c081d7900319e10d0ff1145e3f13d8d1a05398598894df17dd6db366b"} Jan 21 16:08:50 crc kubenswrapper[4902]: I0121 16:08:50.521202 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4621cb0e-ad03-4a82-89a0-a14392def1e7","Type":"ContainerStarted","Data":"c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed"} Jan 21 16:08:50 crc kubenswrapper[4902]: I0121 16:08:50.542467 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.5424485900000002 podStartE2EDuration="3.54244859s" podCreationTimestamp="2026-01-21 16:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:08:50.536628077 +0000 UTC m=+5692.613461106" watchObservedRunningTime="2026-01-21 16:08:50.54244859 +0000 UTC m=+5692.619281619" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.543736 4902 generic.go:334] "Generic (PLEG): container finished" podID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerID="528739ef35dc7389ee33f73742ca6fe14e22e7e82f89d0a372308467295c621a" exitCode=0 Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.543814 
4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f00f389-2d9c-443a-bf45-f1ddb6cea29c","Type":"ContainerDied","Data":"528739ef35dc7389ee33f73742ca6fe14e22e7e82f89d0a372308467295c621a"} Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.630639 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.796732 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-combined-ca-bundle\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.796787 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-internal-tls-certs\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.796815 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-logs\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.796853 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4vqf\" (UniqueName: \"kubernetes.io/projected/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-kube-api-access-z4vqf\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.796890 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data-custom\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.796937 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-public-tls-certs\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.797477 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-logs" (OuterVolumeSpecName: "logs") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.797073 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-scripts\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.797813 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-etc-machine-id\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.797849 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.797861 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.798283 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.798299 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.802985 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-kube-api-access-z4vqf" (OuterVolumeSpecName: "kube-api-access-z4vqf") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "kube-api-access-z4vqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.803265 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-scripts" (OuterVolumeSpecName: "scripts") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.815290 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.831390 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.857239 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.857671 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data" (OuterVolumeSpecName: "config-data") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.858800 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.899555 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.899595 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.899606 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.899616 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4vqf\" (UniqueName: \"kubernetes.io/projected/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-kube-api-access-z4vqf\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.899625 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.899632 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.899639 4902 reconciler_common.go:293] "Volume 
detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.993451 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.556708 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f00f389-2d9c-443a-bf45-f1ddb6cea29c","Type":"ContainerDied","Data":"99bcc40edef719f6b32b7ecb95b8438fb7f37784a97bb90751ee8f29eb800619"} Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.556795 4902 scope.go:117] "RemoveContainer" containerID="528739ef35dc7389ee33f73742ca6fe14e22e7e82f89d0a372308467295c621a" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.556802 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.603779 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.604649 4902 scope.go:117] "RemoveContainer" containerID="c57e300c081d7900319e10d0ff1145e3f13d8d1a05398598894df17dd6db366b" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.622545 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.638803 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:53 crc kubenswrapper[4902]: E0121 16:08:53.639231 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerName="cinder-api" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.639256 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerName="cinder-api" Jan 21 16:08:53 crc kubenswrapper[4902]: E0121 16:08:53.639279 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerName="cinder-api-log" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.639289 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerName="cinder-api-log" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.639523 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerName="cinder-api" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.639563 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerName="cinder-api-log" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.640680 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.644617 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.645016 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.645236 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.653932 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721368 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721451 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-config-data-custom\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721470 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721486 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d9842a-4646-47c5-a81c-18e641f7617f-logs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721518 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-scripts\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721537 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24d9842a-4646-47c5-a81c-18e641f7617f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721561 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721648 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-config-data\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721690 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np6k4\" (UniqueName: \"kubernetes.io/projected/24d9842a-4646-47c5-a81c-18e641f7617f-kube-api-access-np6k4\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823208 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823297 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-config-data-custom\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823326 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823348 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d9842a-4646-47c5-a81c-18e641f7617f-logs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823387 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-scripts\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823414 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24d9842a-4646-47c5-a81c-18e641f7617f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823445 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823492 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24d9842a-4646-47c5-a81c-18e641f7617f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823502 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-config-data\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823590 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np6k4\" (UniqueName: \"kubernetes.io/projected/24d9842a-4646-47c5-a81c-18e641f7617f-kube-api-access-np6k4\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823991 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d9842a-4646-47c5-a81c-18e641f7617f-logs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.828280 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.828601 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-config-data-custom\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.828637 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.828663 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.830262 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-config-data\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.844952 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-scripts\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.851935 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np6k4\" (UniqueName: \"kubernetes.io/projected/24d9842a-4646-47c5-a81c-18e641f7617f-kube-api-access-np6k4\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.970542 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:54 crc kubenswrapper[4902]: I0121 16:08:54.265399 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:54 crc kubenswrapper[4902]: W0121 16:08:54.268285 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24d9842a_4646_47c5_a81c_18e641f7617f.slice/crio-4337578762bf90751771f3dad5c9463114af7c4ebb57c09775cf4b9177b9af71 WatchSource:0}: Error finding container 4337578762bf90751771f3dad5c9463114af7c4ebb57c09775cf4b9177b9af71: Status 404 returned error can't find the container with id 4337578762bf90751771f3dad5c9463114af7c4ebb57c09775cf4b9177b9af71 Jan 21 16:08:54 crc kubenswrapper[4902]: I0121 16:08:54.305637 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" path="/var/lib/kubelet/pods/5f00f389-2d9c-443a-bf45-f1ddb6cea29c/volumes" Jan 21 16:08:54 crc kubenswrapper[4902]: I0121 16:08:54.567489 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"24d9842a-4646-47c5-a81c-18e641f7617f","Type":"ContainerStarted","Data":"4337578762bf90751771f3dad5c9463114af7c4ebb57c09775cf4b9177b9af71"} Jan 21 16:08:55 crc kubenswrapper[4902]: I0121 16:08:55.578137 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"24d9842a-4646-47c5-a81c-18e641f7617f","Type":"ContainerStarted","Data":"5b34d4ea275de8ef636bbd3abf5ab38448026e43d127208c66d9478d2070b5b4"} Jan 21 16:08:55 crc kubenswrapper[4902]: I0121 16:08:55.578424 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"24d9842a-4646-47c5-a81c-18e641f7617f","Type":"ContainerStarted","Data":"2e02282ffcfd24598a8f07268602eacd34c31aaac7f5dadcaa89f6a4b3058400"} Jan 21 16:08:55 crc kubenswrapper[4902]: I0121 16:08:55.578445 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 16:08:55 crc kubenswrapper[4902]: I0121 16:08:55.601231 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.601211908 podStartE2EDuration="2.601211908s" podCreationTimestamp="2026-01-21 16:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:08:55.600326443 +0000 UTC m=+5697.677159472" watchObservedRunningTime="2026-01-21 16:08:55.601211908 +0000 UTC m=+5697.678044937" Jan 21 16:08:58 crc kubenswrapper[4902]: I0121 16:08:58.202634 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 16:08:58 crc kubenswrapper[4902]: I0121 16:08:58.261935 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:08:58 crc kubenswrapper[4902]: I0121 16:08:58.602447 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerName="cinder-scheduler" containerID="cri-o://4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e" gracePeriod=30 Jan 21 16:08:58 crc kubenswrapper[4902]: I0121 16:08:58.603313 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerName="probe" 
containerID="cri-o://c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed" gracePeriod=30 Jan 21 16:08:59 crc kubenswrapper[4902]: I0121 16:08:59.614486 4902 generic.go:334] "Generic (PLEG): container finished" podID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerID="c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed" exitCode=0 Jan 21 16:08:59 crc kubenswrapper[4902]: I0121 16:08:59.614840 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4621cb0e-ad03-4a82-89a0-a14392def1e7","Type":"ContainerDied","Data":"c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed"} Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.164552 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.266395 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data\") pod \"4621cb0e-ad03-4a82-89a0-a14392def1e7\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.266449 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gqz7\" (UniqueName: \"kubernetes.io/projected/4621cb0e-ad03-4a82-89a0-a14392def1e7-kube-api-access-2gqz7\") pod \"4621cb0e-ad03-4a82-89a0-a14392def1e7\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.266695 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data-custom\") pod \"4621cb0e-ad03-4a82-89a0-a14392def1e7\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.266732 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-combined-ca-bundle\") pod \"4621cb0e-ad03-4a82-89a0-a14392def1e7\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.266788 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-scripts\") pod \"4621cb0e-ad03-4a82-89a0-a14392def1e7\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.266848 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4621cb0e-ad03-4a82-89a0-a14392def1e7-etc-machine-id\") pod \"4621cb0e-ad03-4a82-89a0-a14392def1e7\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.266985 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4621cb0e-ad03-4a82-89a0-a14392def1e7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4621cb0e-ad03-4a82-89a0-a14392def1e7" (UID: "4621cb0e-ad03-4a82-89a0-a14392def1e7"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.267357 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4621cb0e-ad03-4a82-89a0-a14392def1e7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.272844 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-scripts" (OuterVolumeSpecName: "scripts") pod "4621cb0e-ad03-4a82-89a0-a14392def1e7" (UID: "4621cb0e-ad03-4a82-89a0-a14392def1e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.274855 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4621cb0e-ad03-4a82-89a0-a14392def1e7" (UID: "4621cb0e-ad03-4a82-89a0-a14392def1e7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.275065 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4621cb0e-ad03-4a82-89a0-a14392def1e7-kube-api-access-2gqz7" (OuterVolumeSpecName: "kube-api-access-2gqz7") pod "4621cb0e-ad03-4a82-89a0-a14392def1e7" (UID: "4621cb0e-ad03-4a82-89a0-a14392def1e7"). InnerVolumeSpecName "kube-api-access-2gqz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.345306 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4621cb0e-ad03-4a82-89a0-a14392def1e7" (UID: "4621cb0e-ad03-4a82-89a0-a14392def1e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.371032 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.371096 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.371114 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.371129 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gqz7\" (UniqueName: \"kubernetes.io/projected/4621cb0e-ad03-4a82-89a0-a14392def1e7-kube-api-access-2gqz7\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.374230 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data" (OuterVolumeSpecName: "config-data") pod "4621cb0e-ad03-4a82-89a0-a14392def1e7" (UID: "4621cb0e-ad03-4a82-89a0-a14392def1e7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.473528 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.629957 4902 generic.go:334] "Generic (PLEG): container finished" podID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerID="4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e" exitCode=0 Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.630008 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4621cb0e-ad03-4a82-89a0-a14392def1e7","Type":"ContainerDied","Data":"4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e"} Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.630055 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4621cb0e-ad03-4a82-89a0-a14392def1e7","Type":"ContainerDied","Data":"ce33e9839d5cefb904a6f325f236da539af4c95c84835eb4cf854486a05a8ed2"} Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.630077 4902 scope.go:117] "RemoveContainer" containerID="c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.630201 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.675747 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.710127 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.713198 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:09:00 crc kubenswrapper[4902]: E0121 16:09:00.713637 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerName="probe" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.713654 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerName="probe" Jan 21 16:09:00 crc kubenswrapper[4902]: E0121 16:09:00.713670 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerName="cinder-scheduler" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.713679 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerName="cinder-scheduler" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.713836 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerName="probe" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.713853 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerName="cinder-scheduler" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.714723 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.717452 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.725388 4902 scope.go:117] "RemoveContainer" containerID="4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.733566 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.772455 4902 scope.go:117] "RemoveContainer" containerID="c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed" Jan 21 16:09:00 crc kubenswrapper[4902]: E0121 16:09:00.773013 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed\": container with ID starting with c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed not found: ID does not exist" containerID="c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.773109 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed"} err="failed to get container status \"c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed\": rpc error: code = NotFound desc = could not find container \"c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed\": container with ID starting with c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed not found: ID does not exist" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.773150 4902 scope.go:117] "RemoveContainer" containerID="4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e" Jan 21 16:09:00 crc kubenswrapper[4902]: E0121 16:09:00.774137 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e\": container with ID starting with 4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e not found: ID does not exist" containerID="4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.774173 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e"} err="failed to get container status \"4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e\": rpc error: code = NotFound desc = could not find container \"4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e\": container with ID starting with 4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e not found: ID does not exist" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.790759 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-scripts\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.790848 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-jpsfq\" (UniqueName: \"kubernetes.io/projected/16354b62-7b74-468c-8953-3a41b1dc1a66-kube-api-access-jpsfq\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.790878 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.790923 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-config-data\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.790939 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.791000 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16354b62-7b74-468c-8953-3a41b1dc1a66-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.892598 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.892670 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-config-data\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.892695 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.892725 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16354b62-7b74-468c-8953-3a41b1dc1a66-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.892855 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-scripts\") pod \"cinder-scheduler-0\" 
(UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.892899 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpsfq\" (UniqueName: \"kubernetes.io/projected/16354b62-7b74-468c-8953-3a41b1dc1a66-kube-api-access-jpsfq\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.893335 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16354b62-7b74-468c-8953-3a41b1dc1a66-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.898717 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-scripts\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.899249 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.899363 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.899495 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-config-data\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.908873 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpsfq\" (UniqueName: \"kubernetes.io/projected/16354b62-7b74-468c-8953-3a41b1dc1a66-kube-api-access-jpsfq\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:01 crc kubenswrapper[4902]: I0121 16:09:01.040303 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:09:01 crc kubenswrapper[4902]: I0121 16:09:01.457581 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:09:01 crc kubenswrapper[4902]: W0121 16:09:01.461960 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16354b62_7b74_468c_8953_3a41b1dc1a66.slice/crio-197dd8604d46e846f0b78367c2171681ce0ac61cbf9e2325c5e50dbb33d3b228 WatchSource:0}: Error finding container 197dd8604d46e846f0b78367c2171681ce0ac61cbf9e2325c5e50dbb33d3b228: Status 404 returned error can't find the container with id 197dd8604d46e846f0b78367c2171681ce0ac61cbf9e2325c5e50dbb33d3b228 Jan 21 16:09:01 crc kubenswrapper[4902]: I0121 16:09:01.640792 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"16354b62-7b74-468c-8953-3a41b1dc1a66","Type":"ContainerStarted","Data":"197dd8604d46e846f0b78367c2171681ce0ac61cbf9e2325c5e50dbb33d3b228"} Jan 21 16:09:02 crc kubenswrapper[4902]: I0121 16:09:02.313837 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" path="/var/lib/kubelet/pods/4621cb0e-ad03-4a82-89a0-a14392def1e7/volumes" Jan 21 16:09:02 crc kubenswrapper[4902]: I0121 16:09:02.656134 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"16354b62-7b74-468c-8953-3a41b1dc1a66","Type":"ContainerStarted","Data":"7eaa2ec2402e0110ed8bb33b22d716d250c245e0dca827e09a9df521af3ee8c3"} Jan 21 16:09:03 crc kubenswrapper[4902]: I0121 16:09:03.666634 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"16354b62-7b74-468c-8953-3a41b1dc1a66","Type":"ContainerStarted","Data":"a258ff3809e0aff102cc97afbd32d3d0546a647f066bf1dadb98911386074ac9"} Jan 21 16:09:03 crc kubenswrapper[4902]: I0121 16:09:03.694863 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.694840748 podStartE2EDuration="3.694840748s" podCreationTimestamp="2026-01-21 16:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:03.685581557 +0000 UTC m=+5705.762414596" watchObservedRunningTime="2026-01-21 16:09:03.694840748 +0000 UTC m=+5705.771673797" Jan 21 16:09:06 crc kubenswrapper[4902]: I0121 16:09:06.040676 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 16:09:06 crc kubenswrapper[4902]: I0121 16:09:06.041256 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 16:09:11 crc kubenswrapper[4902]: I0121 16:09:11.238104 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.115815 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-47gxx"] Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.117270 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-47gxx" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.127506 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-47gxx"] Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.143555 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8cdq\" (UniqueName: \"kubernetes.io/projected/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-kube-api-access-z8cdq\") pod \"glance-db-create-47gxx\" (UID: \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\") " pod="openstack/glance-db-create-47gxx" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.143643 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-operator-scripts\") pod \"glance-db-create-47gxx\" (UID: \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\") " pod="openstack/glance-db-create-47gxx" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.229735 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-acb5-account-create-update-v87vq"] Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.230952 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.233395 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.240925 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-acb5-account-create-update-v87vq"] Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.244695 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8cdq\" (UniqueName: \"kubernetes.io/projected/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-kube-api-access-z8cdq\") pod \"glance-db-create-47gxx\" (UID: \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\") " pod="openstack/glance-db-create-47gxx" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.244771 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsfzn\" (UniqueName: \"kubernetes.io/projected/91fe5022-2b6f-46b9-9275-c8a809b32808-kube-api-access-nsfzn\") pod \"glance-acb5-account-create-update-v87vq\" (UID: \"91fe5022-2b6f-46b9-9275-c8a809b32808\") " pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.244851 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-operator-scripts\") pod \"glance-db-create-47gxx\" (UID: \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\") " pod="openstack/glance-db-create-47gxx" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.244938 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91fe5022-2b6f-46b9-9275-c8a809b32808-operator-scripts\") pod \"glance-acb5-account-create-update-v87vq\" (UID: \"91fe5022-2b6f-46b9-9275-c8a809b32808\") " pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.246237 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-operator-scripts\") pod \"glance-db-create-47gxx\" (UID: \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\") " pod="openstack/glance-db-create-47gxx" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.268658 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8cdq\" (UniqueName: \"kubernetes.io/projected/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-kube-api-access-z8cdq\") pod \"glance-db-create-47gxx\" (UID: \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\") " pod="openstack/glance-db-create-47gxx" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.349575 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91fe5022-2b6f-46b9-9275-c8a809b32808-operator-scripts\") pod \"glance-acb5-account-create-update-v87vq\" (UID: \"91fe5022-2b6f-46b9-9275-c8a809b32808\") " pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.349766 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsfzn\" (UniqueName: \"kubernetes.io/projected/91fe5022-2b6f-46b9-9275-c8a809b32808-kube-api-access-nsfzn\") pod \"glance-acb5-account-create-update-v87vq\" (UID: \"91fe5022-2b6f-46b9-9275-c8a809b32808\") " pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.350434 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91fe5022-2b6f-46b9-9275-c8a809b32808-operator-scripts\") pod \"glance-acb5-account-create-update-v87vq\" (UID: \"91fe5022-2b6f-46b9-9275-c8a809b32808\") " pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.368526 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsfzn\" (UniqueName: \"kubernetes.io/projected/91fe5022-2b6f-46b9-9275-c8a809b32808-kube-api-access-nsfzn\") pod \"glance-acb5-account-create-update-v87vq\" (UID: \"91fe5022-2b6f-46b9-9275-c8a809b32808\") " pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.439577 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-47gxx" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.548783 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.920933 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-47gxx"] Jan 21 16:09:14 crc kubenswrapper[4902]: W0121 16:09:14.928833 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5062b64_8c2a_46ee_ab92_3eb4d6e3fe95.slice/crio-564325e9036ee60992baa8a7e61c26741d72e356091de1d9184d1fc8bf8a0e9d WatchSource:0}: Error finding container 564325e9036ee60992baa8a7e61c26741d72e356091de1d9184d1fc8bf8a0e9d: Status 404 returned error can't find the container with id 564325e9036ee60992baa8a7e61c26741d72e356091de1d9184d1fc8bf8a0e9d Jan 21 16:09:15 crc kubenswrapper[4902]: I0121 16:09:15.040985 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-acb5-account-create-update-v87vq"] Jan 21 16:09:15 crc kubenswrapper[4902]: W0121 16:09:15.056130 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91fe5022_2b6f_46b9_9275_c8a809b32808.slice/crio-73998dc13065700af19fb284652ddcc2ebf55bd058a5a03ed578177fad335b03 WatchSource:0}: Error finding container 73998dc13065700af19fb284652ddcc2ebf55bd058a5a03ed578177fad335b03: Status 404 returned error can't find the container with id 73998dc13065700af19fb284652ddcc2ebf55bd058a5a03ed578177fad335b03 Jan 21 16:09:15 crc kubenswrapper[4902]: I0121 16:09:15.778853 4902 generic.go:334] "Generic (PLEG): container finished" podID="f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95" containerID="896306bd2b1df34ec4addf4110626bc7531717802d050ed131267e70790b5a08" exitCode=0 Jan 21 16:09:15 crc kubenswrapper[4902]: I0121 16:09:15.778929 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-47gxx" event={"ID":"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95","Type":"ContainerDied","Data":"896306bd2b1df34ec4addf4110626bc7531717802d050ed131267e70790b5a08"} Jan 21 16:09:15 crc kubenswrapper[4902]: I0121 16:09:15.779372 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-47gxx" event={"ID":"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95","Type":"ContainerStarted","Data":"564325e9036ee60992baa8a7e61c26741d72e356091de1d9184d1fc8bf8a0e9d"} Jan 21 16:09:15 crc kubenswrapper[4902]: I0121 16:09:15.781983 4902 generic.go:334] "Generic (PLEG): container finished" podID="91fe5022-2b6f-46b9-9275-c8a809b32808" containerID="fa806723dfd7c0c4b6154749911e6912458d2480fc0fa40932f24e709061ffad" exitCode=0 Jan 21 16:09:15 crc kubenswrapper[4902]: I0121 16:09:15.782059 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-acb5-account-create-update-v87vq" event={"ID":"91fe5022-2b6f-46b9-9275-c8a809b32808","Type":"ContainerDied","Data":"fa806723dfd7c0c4b6154749911e6912458d2480fc0fa40932f24e709061ffad"} Jan 21 16:09:15 crc kubenswrapper[4902]: I0121 16:09:15.782094 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-acb5-account-create-update-v87vq" event={"ID":"91fe5022-2b6f-46b9-9275-c8a809b32808","Type":"ContainerStarted","Data":"73998dc13065700af19fb284652ddcc2ebf55bd058a5a03ed578177fad335b03"} Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.283708 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.289678 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-47gxx" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.301769 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91fe5022-2b6f-46b9-9275-c8a809b32808-operator-scripts\") pod \"91fe5022-2b6f-46b9-9275-c8a809b32808\" (UID: \"91fe5022-2b6f-46b9-9275-c8a809b32808\") " Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.302025 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsfzn\" (UniqueName: \"kubernetes.io/projected/91fe5022-2b6f-46b9-9275-c8a809b32808-kube-api-access-nsfzn\") pod \"91fe5022-2b6f-46b9-9275-c8a809b32808\" (UID: \"91fe5022-2b6f-46b9-9275-c8a809b32808\") " Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.302592 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91fe5022-2b6f-46b9-9275-c8a809b32808-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91fe5022-2b6f-46b9-9275-c8a809b32808" (UID: "91fe5022-2b6f-46b9-9275-c8a809b32808"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.310034 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91fe5022-2b6f-46b9-9275-c8a809b32808-kube-api-access-nsfzn" (OuterVolumeSpecName: "kube-api-access-nsfzn") pod "91fe5022-2b6f-46b9-9275-c8a809b32808" (UID: "91fe5022-2b6f-46b9-9275-c8a809b32808"). InnerVolumeSpecName "kube-api-access-nsfzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.403768 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8cdq\" (UniqueName: \"kubernetes.io/projected/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-kube-api-access-z8cdq\") pod \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\" (UID: \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\") " Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.403818 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-operator-scripts\") pod \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\" (UID: \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\") " Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.404202 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91fe5022-2b6f-46b9-9275-c8a809b32808-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.404222 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsfzn\" (UniqueName: \"kubernetes.io/projected/91fe5022-2b6f-46b9-9275-c8a809b32808-kube-api-access-nsfzn\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.404409 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95" (UID: "f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.406319 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-kube-api-access-z8cdq" (OuterVolumeSpecName: "kube-api-access-z8cdq") pod "f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95" (UID: "f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95"). InnerVolumeSpecName "kube-api-access-z8cdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.505605 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8cdq\" (UniqueName: \"kubernetes.io/projected/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-kube-api-access-z8cdq\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.505639 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.815867 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-47gxx" event={"ID":"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95","Type":"ContainerDied","Data":"564325e9036ee60992baa8a7e61c26741d72e356091de1d9184d1fc8bf8a0e9d"} Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.816363 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="564325e9036ee60992baa8a7e61c26741d72e356091de1d9184d1fc8bf8a0e9d" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.815892 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-47gxx" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.817724 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-acb5-account-create-update-v87vq" event={"ID":"91fe5022-2b6f-46b9-9275-c8a809b32808","Type":"ContainerDied","Data":"73998dc13065700af19fb284652ddcc2ebf55bd058a5a03ed578177fad335b03"} Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.817760 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73998dc13065700af19fb284652ddcc2ebf55bd058a5a03ed578177fad335b03" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.817826 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.403211 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-8xw4q"] Jan 21 16:09:19 crc kubenswrapper[4902]: E0121 16:09:19.403557 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95" containerName="mariadb-database-create" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.403568 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95" containerName="mariadb-database-create" Jan 21 16:09:19 crc kubenswrapper[4902]: E0121 16:09:19.403584 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fe5022-2b6f-46b9-9275-c8a809b32808" containerName="mariadb-account-create-update" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.403590 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fe5022-2b6f-46b9-9275-c8a809b32808" containerName="mariadb-account-create-update" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.405418 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95" containerName="mariadb-database-create" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.405468 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="91fe5022-2b6f-46b9-9275-c8a809b32808" containerName="mariadb-account-create-update" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.407563 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.409294 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mn7jp" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.409527 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.418032 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8xw4q"] Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.436860 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgs4d\" (UniqueName: \"kubernetes.io/projected/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-kube-api-access-pgs4d\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.436912 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-combined-ca-bundle\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.436995 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-db-sync-config-data\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.437106 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-config-data\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.539556 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-db-sync-config-data\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.539669 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-config-data\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.539728 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgs4d\" (UniqueName: \"kubernetes.io/projected/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-kube-api-access-pgs4d\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.539751 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-combined-ca-bundle\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.545872 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-combined-ca-bundle\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.546812 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-config-data\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.549531 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-db-sync-config-data\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.558787 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgs4d\" (UniqueName: \"kubernetes.io/projected/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-kube-api-access-pgs4d\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.724005 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:20 crc kubenswrapper[4902]: I0121 16:09:20.224353 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8xw4q"] Jan 21 16:09:20 crc kubenswrapper[4902]: I0121 16:09:20.859655 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8xw4q" event={"ID":"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3","Type":"ContainerStarted","Data":"203c5f96aeff362658b5520a6e9eab7da26f8f63fd730b8b01fac5d263703aa2"} Jan 21 16:09:20 crc kubenswrapper[4902]: I0121 16:09:20.859996 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8xw4q" event={"ID":"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3","Type":"ContainerStarted","Data":"8979e9039ffe0d36c13a9987e7d17cd697d78bcf90b7adfceedd998dffa5e223"} Jan 21 16:09:20 crc kubenswrapper[4902]: I0121 16:09:20.882125 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-8xw4q" podStartSLOduration=1.8821052950000001 podStartE2EDuration="1.882105295s" podCreationTimestamp="2026-01-21 16:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:20.872594888 +0000 UTC m=+5722.949427927" watchObservedRunningTime="2026-01-21 16:09:20.882105295 +0000 UTC m=+5722.958938324" Jan 21 16:09:23 crc kubenswrapper[4902]: E0121 16:09:23.890469 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ba2b6d5_88af_4c5d_93dd_21ed05fe3ba3.slice/crio-conmon-203c5f96aeff362658b5520a6e9eab7da26f8f63fd730b8b01fac5d263703aa2.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:09:23 crc kubenswrapper[4902]: I0121 16:09:23.890992 4902 generic.go:334] "Generic (PLEG): container finished" podID="8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" containerID="203c5f96aeff362658b5520a6e9eab7da26f8f63fd730b8b01fac5d263703aa2" exitCode=0 Jan 21 16:09:23 crc kubenswrapper[4902]: I0121 16:09:23.891017 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8xw4q" event={"ID":"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3","Type":"ContainerDied","Data":"203c5f96aeff362658b5520a6e9eab7da26f8f63fd730b8b01fac5d263703aa2"} Jan 21 16:09:24 crc kubenswrapper[4902]: I0121 16:09:24.545280 4902 scope.go:117] "RemoveContainer" containerID="0b56fe28c730faebb9b858e50e97ecef1625af2c756c8684ae0d499694f95667" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.344677 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.466179 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-db-sync-config-data\") pod \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.466265 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgs4d\" (UniqueName: \"kubernetes.io/projected/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-kube-api-access-pgs4d\") pod \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.466375 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-combined-ca-bundle\") pod \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.466424 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-config-data\") pod \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.472557 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-kube-api-access-pgs4d" (OuterVolumeSpecName: "kube-api-access-pgs4d") pod "8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" (UID: "8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3"). InnerVolumeSpecName "kube-api-access-pgs4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.478280 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" (UID: "8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.491618 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" (UID: "8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.543335 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-config-data" (OuterVolumeSpecName: "config-data") pod "8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" (UID: "8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.568201 4902 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.568238 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgs4d\" (UniqueName: \"kubernetes.io/projected/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-kube-api-access-pgs4d\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.568255 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.568267 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.909890 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8xw4q" event={"ID":"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3","Type":"ContainerDied","Data":"8979e9039ffe0d36c13a9987e7d17cd697d78bcf90b7adfceedd998dffa5e223"} Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.909937 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8979e9039ffe0d36c13a9987e7d17cd697d78bcf90b7adfceedd998dffa5e223" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.909993 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.183586 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:26 crc kubenswrapper[4902]: E0121 16:09:26.183930 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" containerName="glance-db-sync" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.183946 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" containerName="glance-db-sync" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.184136 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" containerName="glance-db-sync" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.184934 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.190999 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.192745 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.192761 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mn7jp" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.199836 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.285554 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bdc9ddbfc-5j79v"] Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.287309 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.312886 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bdc9ddbfc-5j79v"] Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.378578 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.380863 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.383411 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.383785 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-nb\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.383811 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-sb\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.383852 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-logs\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.383877 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.383899 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.383921 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2cj4\" (UniqueName: \"kubernetes.io/projected/f87b7e66-2e90-42f0-babb-fc5013fa6077-kube-api-access-s2cj4\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.383952 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-dns-svc\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384015 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-scripts\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384033 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384126 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-config\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384164 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-logs\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384179 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384240 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384349 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8tn8\" (UniqueName: \"kubernetes.io/projected/d895a439-2fd1-43e5-ae5b-37c1b855a857-kube-api-access-v8tn8\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384408 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384453 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-config-data\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384490 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z89k5\" (UniqueName: \"kubernetes.io/projected/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-kube-api-access-z89k5\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.398336 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486053 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-config-data\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486112 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z89k5\" (UniqueName: \"kubernetes.io/projected/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-kube-api-access-z89k5\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486136 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-nb\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486283 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-sb\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486327 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-logs\") pod 
\"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486389 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486437 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486476 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2cj4\" (UniqueName: \"kubernetes.io/projected/f87b7e66-2e90-42f0-babb-fc5013fa6077-kube-api-access-s2cj4\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486531 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-dns-svc\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486579 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-scripts\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486611 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486686 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-config\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486743 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-logs\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486759 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " 
pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486774 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486839 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8tn8\" (UniqueName: \"kubernetes.io/projected/d895a439-2fd1-43e5-ae5b-37c1b855a857-kube-api-access-v8tn8\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486877 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.487140 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-sb\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.487931 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.488208 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-logs\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.488282 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-logs\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.488538 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-dns-svc\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.488719 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.488814 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-config\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.490775 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-nb\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.492883 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.494197 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-scripts\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.494720 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-config-data\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.497276 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.503605 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.504204 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z89k5\" (UniqueName: \"kubernetes.io/projected/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-kube-api-access-z89k5\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.506829 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.509231 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2cj4\" (UniqueName: 
\"kubernetes.io/projected/f87b7e66-2e90-42f0-babb-fc5013fa6077-kube-api-access-s2cj4\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.511717 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8tn8\" (UniqueName: \"kubernetes.io/projected/d895a439-2fd1-43e5-ae5b-37c1b855a857-kube-api-access-v8tn8\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.606288 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.700513 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:27 crc kubenswrapper[4902]: I0121 16:09:26.802808 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:09:27 crc kubenswrapper[4902]: I0121 16:09:27.356460 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:27 crc kubenswrapper[4902]: I0121 16:09:27.820314 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bdc9ddbfc-5j79v"] Jan 21 16:09:27 crc kubenswrapper[4902]: I0121 16:09:27.944620 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" event={"ID":"f87b7e66-2e90-42f0-babb-fc5013fa6077","Type":"ContainerStarted","Data":"00e6bf2e91ac88d92bb8b5aa081d62fc62c975ee1e0acebbcdf006224895188a"} Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.003607 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.119266 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:28 crc kubenswrapper[4902]: W0121 16:09:28.128837 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89e7fd5c_9bb9_4f38_98d2_9cbfb20480d7.slice/crio-1d32a28fbd7e96a1a583185b8a4ff7c76c3be27c2af3b3ef55f445d5eb9b0c25 WatchSource:0}: Error finding container 1d32a28fbd7e96a1a583185b8a4ff7c76c3be27c2af3b3ef55f445d5eb9b0c25: Status 404 returned error can't find the container with id 1d32a28fbd7e96a1a583185b8a4ff7c76c3be27c2af3b3ef55f445d5eb9b0c25 Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.335070 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.975947 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d895a439-2fd1-43e5-ae5b-37c1b855a857","Type":"ContainerStarted","Data":"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28"} Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.976317 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d895a439-2fd1-43e5-ae5b-37c1b855a857","Type":"ContainerStarted","Data":"e143ea804eb357383be65a4f9299d35f13390bc0106b3c7a48e5ed3ee751c488"} Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.980011 4902 
Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.980091 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" event={"ID":"f87b7e66-2e90-42f0-babb-fc5013fa6077","Type":"ContainerDied","Data":"9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545"}
Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.986644 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7","Type":"ContainerStarted","Data":"2391defccba80f13530a209fd1975b89ba247557a50efabc95271e6deef454d1"}
Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.986679 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7","Type":"ContainerStarted","Data":"1d32a28fbd7e96a1a583185b8a4ff7c76c3be27c2af3b3ef55f445d5eb9b0c25"}
Jan 21 16:09:29 crc kubenswrapper[4902]: I0121 16:09:29.994791 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d895a439-2fd1-43e5-ae5b-37c1b855a857","Type":"ContainerStarted","Data":"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1"}
Jan 21 16:09:29 crc kubenswrapper[4902]: I0121 16:09:29.994933 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerName="glance-log" containerID="cri-o://dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28" gracePeriod=30
Jan 21 16:09:29 crc kubenswrapper[4902]: I0121 16:09:29.994976 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerName="glance-httpd" containerID="cri-o://ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1" gracePeriod=30
Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.000638 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" event={"ID":"f87b7e66-2e90-42f0-babb-fc5013fa6077","Type":"ContainerStarted","Data":"e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7"}
Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.000993 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v"
Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.003832 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7","Type":"ContainerStarted","Data":"e2bfe2033a458018f9859bed675794dcb04f5f1714eb354b1bacd9ba8be6fe42"}
Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.003976 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerName="glance-log" containerID="cri-o://2391defccba80f13530a209fd1975b89ba247557a50efabc95271e6deef454d1" gracePeriod=30
Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.004112 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerName="glance-httpd" containerID="cri-o://e2bfe2033a458018f9859bed675794dcb04f5f1714eb354b1bacd9ba8be6fe42" gracePeriod=30
Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.036868 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.036847101 podStartE2EDuration="4.036847101s" podCreationTimestamp="2026-01-21 16:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:30.013963958 +0000 UTC m=+5732.090796997" watchObservedRunningTime="2026-01-21 16:09:30.036847101 +0000 UTC m=+5732.113680130"
Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.037198 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.037192921 podStartE2EDuration="4.037192921s" podCreationTimestamp="2026-01-21 16:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:30.033414555 +0000 UTC m=+5732.110247604" watchObservedRunningTime="2026-01-21 16:09:30.037192921 +0000 UTC m=+5732.114025950"
Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.052113 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" podStartSLOduration=4.05209098 podStartE2EDuration="4.05209098s" podCreationTimestamp="2026-01-21 16:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:30.050607178 +0000 UTC m=+5732.127440207" watchObservedRunningTime="2026-01-21 16:09:30.05209098 +0000 UTC m=+5732.128924009"
Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.748015 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.888787 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-logs\") pod \"d895a439-2fd1-43e5-ae5b-37c1b855a857\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.889035 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-httpd-run\") pod \"d895a439-2fd1-43e5-ae5b-37c1b855a857\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.889163 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-config-data\") pod \"d895a439-2fd1-43e5-ae5b-37c1b855a857\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.889199 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-scripts\") pod \"d895a439-2fd1-43e5-ae5b-37c1b855a857\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.889259 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-combined-ca-bundle\") pod \"d895a439-2fd1-43e5-ae5b-37c1b855a857\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.889320 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8tn8\" (UniqueName: \"kubernetes.io/projected/d895a439-2fd1-43e5-ae5b-37c1b855a857-kube-api-access-v8tn8\") pod \"d895a439-2fd1-43e5-ae5b-37c1b855a857\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.889579 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d895a439-2fd1-43e5-ae5b-37c1b855a857" (UID: "d895a439-2fd1-43e5-ae5b-37c1b855a857"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.889816 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-logs" (OuterVolumeSpecName: "logs") pod "d895a439-2fd1-43e5-ae5b-37c1b855a857" (UID: "d895a439-2fd1-43e5-ae5b-37c1b855a857"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.890560 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.890586 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.893981 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-scripts" (OuterVolumeSpecName: "scripts") pod "d895a439-2fd1-43e5-ae5b-37c1b855a857" (UID: "d895a439-2fd1-43e5-ae5b-37c1b855a857"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.894205 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d895a439-2fd1-43e5-ae5b-37c1b855a857-kube-api-access-v8tn8" (OuterVolumeSpecName: "kube-api-access-v8tn8") pod "d895a439-2fd1-43e5-ae5b-37c1b855a857" (UID: "d895a439-2fd1-43e5-ae5b-37c1b855a857"). InnerVolumeSpecName "kube-api-access-v8tn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.916663 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d895a439-2fd1-43e5-ae5b-37c1b855a857" (UID: "d895a439-2fd1-43e5-ae5b-37c1b855a857"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.938533 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-config-data" (OuterVolumeSpecName: "config-data") pod "d895a439-2fd1-43e5-ae5b-37c1b855a857" (UID: "d895a439-2fd1-43e5-ae5b-37c1b855a857"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.991802 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.991836 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.991847 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.991859 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8tn8\" (UniqueName: \"kubernetes.io/projected/d895a439-2fd1-43e5-ae5b-37c1b855a857-kube-api-access-v8tn8\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.017123 4902 generic.go:334] "Generic (PLEG): container finished" podID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerID="ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1" exitCode=0 Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.017166 4902 generic.go:334] "Generic (PLEG): container finished" podID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerID="dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28" exitCode=143 Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.017279 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.020457 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d895a439-2fd1-43e5-ae5b-37c1b855a857","Type":"ContainerDied","Data":"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1"} Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.020511 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d895a439-2fd1-43e5-ae5b-37c1b855a857","Type":"ContainerDied","Data":"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28"} Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.020530 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d895a439-2fd1-43e5-ae5b-37c1b855a857","Type":"ContainerDied","Data":"e143ea804eb357383be65a4f9299d35f13390bc0106b3c7a48e5ed3ee751c488"} Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.020551 4902 scope.go:117] "RemoveContainer" containerID="ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.024432 4902 generic.go:334] "Generic (PLEG): container finished" podID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerID="e2bfe2033a458018f9859bed675794dcb04f5f1714eb354b1bacd9ba8be6fe42" exitCode=0 Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.024461 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7","Type":"ContainerDied","Data":"e2bfe2033a458018f9859bed675794dcb04f5f1714eb354b1bacd9ba8be6fe42"} Jan 21 16:09:31 crc kubenswrapper[4902]: 
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.024469 4902 generic.go:334] "Generic (PLEG): container finished" podID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerID="2391defccba80f13530a209fd1975b89ba247557a50efabc95271e6deef454d1" exitCode=143
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.072523 4902 scope.go:117] "RemoveContainer" containerID="dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28"
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.075217 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.090110 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.092540 4902 scope.go:117] "RemoveContainer" containerID="ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1"
Jan 21 16:09:31 crc kubenswrapper[4902]: E0121 16:09:31.093446 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1\": container with ID starting with ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1 not found: ID does not exist" containerID="ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1"
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.093488 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1"} err="failed to get container status \"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1\": rpc error: code = NotFound desc = could not find container \"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1\": container with ID starting with ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1 not found: ID does not exist"
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.093514 4902 scope.go:117] "RemoveContainer" containerID="dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28"
Jan 21 16:09:31 crc kubenswrapper[4902]: E0121 16:09:31.093755 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28\": container with ID starting with dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28 not found: ID does not exist" containerID="dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28"
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.093788 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28"} err="failed to get container status \"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28\": rpc error: code = NotFound desc = could not find container \"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28\": container with ID starting with dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28 not found: ID does not exist"
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.093807 4902 scope.go:117] "RemoveContainer" containerID="ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1"
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.094074 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1"} err="failed to get container status \"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1\": rpc error: code = NotFound desc = could not find container \"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1\": container with ID starting with ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1 not found: ID does not exist"
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.094106 4902 scope.go:117] "RemoveContainer" containerID="dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28"
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.094288 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28"} err="failed to get container status \"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28\": rpc error: code = NotFound desc = could not find container \"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28\": container with ID starting with dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28 not found: ID does not exist"
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.103131 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 16:09:31 crc kubenswrapper[4902]: E0121 16:09:31.103650 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerName="glance-log"
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.103670 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerName="glance-log"
Jan 21 16:09:31 crc kubenswrapper[4902]: E0121 16:09:31.103696 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerName="glance-httpd"
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.103705 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerName="glance-httpd"
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.103932 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerName="glance-log"
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.103960 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerName="glance-httpd"
Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.107351 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
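
The RemoveContainer / "DeleteContainer returned error" exchange above is the kubelet's idempotent cleanup path: it re-queries container status before each removal, and a NotFound from the runtime just means CRI-O already pruned the container, so the repeated errors here are expected noise rather than a failure. The pattern, in sketch form (illustrative names, not the actual CRI interface):

package main

import (
	"errors"
	"fmt"
)

// Sketch of idempotent container removal: treat NotFound from the
// runtime as "already done". Names are illustrative, not the CRI API.
var errNotFound = errors.New("NotFound: ID does not exist")

// Stand-in for a CRI ContainerStatus call against an already-pruned container.
func containerStatus(id string) error { return errNotFound }

func removeContainer(id string) {
	if err := containerStatus(id); errors.Is(err, errNotFound) {
		fmt.Printf("container %s already gone; nothing to remove\n", id[:12])
		return
	}
	fmt.Println("would stop and remove", id)
}

func main() {
	removeContainer("ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1")
}
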
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.109415 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.109566 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.119116 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.195097 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm8hg\" (UniqueName: \"kubernetes.io/projected/8a90211b-865e-43ee-a4d2-4435d5377cac-kube-api-access-jm8hg\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.195178 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.195235 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.195269 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.195314 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-logs\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.195343 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.195384 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.296151 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.296201 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.296229 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-logs\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.296256 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.296287 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.296352 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm8hg\" (UniqueName: \"kubernetes.io/projected/8a90211b-865e-43ee-a4d2-4435d5377cac-kube-api-access-jm8hg\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.296383 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.296849 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.297328 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-logs\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.301393 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.301906 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.316990 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm8hg\" (UniqueName: \"kubernetes.io/projected/8a90211b-865e-43ee-a4d2-4435d5377cac-kube-api-access-jm8hg\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.317696 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.320217 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.409999 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.479202 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.499652 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-combined-ca-bundle\") pod \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.499811 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z89k5\" (UniqueName: \"kubernetes.io/projected/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-kube-api-access-z89k5\") pod \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.499856 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-httpd-run\") pod \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.499898 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-config-data\") pod \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.499933 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-scripts\") pod \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.499960 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-logs\") pod \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.500466 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" (UID: "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.500621 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-logs" (OuterVolumeSpecName: "logs") pod "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" (UID: "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.506386 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-kube-api-access-z89k5" (OuterVolumeSpecName: "kube-api-access-z89k5") pod "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" (UID: "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7"). InnerVolumeSpecName "kube-api-access-z89k5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.530377 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-scripts" (OuterVolumeSpecName: "scripts") pod "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" (UID: "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.584228 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" (UID: "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.602442 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z89k5\" (UniqueName: \"kubernetes.io/projected/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-kube-api-access-z89k5\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.602472 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.602484 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.602493 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.602500 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.652196 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-config-data" (OuterVolumeSpecName: "config-data") pod "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" (UID: "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.707564 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.035197 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7","Type":"ContainerDied","Data":"1d32a28fbd7e96a1a583185b8a4ff7c76c3be27c2af3b3ef55f445d5eb9b0c25"} Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.035247 4902 scope.go:117] "RemoveContainer" containerID="e2bfe2033a458018f9859bed675794dcb04f5f1714eb354b1bacd9ba8be6fe42" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.035499 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.063882 4902 scope.go:117] "RemoveContainer" containerID="2391defccba80f13530a209fd1975b89ba247557a50efabc95271e6deef454d1" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.068736 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.075791 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.104392 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:32 crc kubenswrapper[4902]: E0121 16:09:32.107440 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerName="glance-log" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.107478 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerName="glance-log" Jan 21 16:09:32 crc kubenswrapper[4902]: E0121 16:09:32.107522 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerName="glance-httpd" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.107535 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerName="glance-httpd" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.107877 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerName="glance-log" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.107896 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerName="glance-httpd" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.109793 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.113073 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.113223 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.116861 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.202586 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:09:32 crc kubenswrapper[4902]: W0121 16:09:32.207325 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a90211b_865e_43ee_a4d2_4435d5377cac.slice/crio-9465028a66213606555e0f8ddd61e53e1a204236d21e0dbf53c9bae174755deb WatchSource:0}: Error finding container 9465028a66213606555e0f8ddd61e53e1a204236d21e0dbf53c9bae174755deb: Status 404 returned error can't find the container with id 9465028a66213606555e0f8ddd61e53e1a204236d21e0dbf53c9bae174755deb Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.219359 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkkjx\" (UniqueName: \"kubernetes.io/projected/621700c2-adff-4cf1-81a4-fb0213e5e919-kube-api-access-kkkjx\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.221275 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-config-data\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.221443 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-logs\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.221566 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.221629 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.221745 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.221830 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-scripts\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.307875 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" path="/var/lib/kubelet/pods/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7/volumes" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.308631 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" path="/var/lib/kubelet/pods/d895a439-2fd1-43e5-ae5b-37c1b855a857/volumes" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.323167 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.323933 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.324013 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.324103 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-scripts\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.324459 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.324561 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkkjx\" (UniqueName: \"kubernetes.io/projected/621700c2-adff-4cf1-81a4-fb0213e5e919-kube-api-access-kkkjx\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.324604 4902 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-config-data\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.324682 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-logs\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.325125 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-logs\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.327424 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-scripts\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.328439 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.329895 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.338340 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-config-data\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.341948 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkkjx\" (UniqueName: \"kubernetes.io/projected/621700c2-adff-4cf1-81a4-fb0213e5e919-kube-api-access-kkkjx\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.468266 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:09:33 crc kubenswrapper[4902]: I0121 16:09:33.051233 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a90211b-865e-43ee-a4d2-4435d5377cac","Type":"ContainerStarted","Data":"b5f9108bd4e377347ea43cf1022065cb061fb7505fcb4f124adde97f4fd9fe0c"} Jan 21 16:09:33 crc kubenswrapper[4902]: I0121 16:09:33.051752 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a90211b-865e-43ee-a4d2-4435d5377cac","Type":"ContainerStarted","Data":"9465028a66213606555e0f8ddd61e53e1a204236d21e0dbf53c9bae174755deb"} Jan 21 16:09:33 crc kubenswrapper[4902]: I0121 16:09:33.067521 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:33 crc kubenswrapper[4902]: W0121 16:09:33.074774 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod621700c2_adff_4cf1_81a4_fb0213e5e919.slice/crio-fb6c969ddf6477f474e95f9c5c6fde9452e3279bb465fbf4b3d1c7ae5b80a349 WatchSource:0}: Error finding container fb6c969ddf6477f474e95f9c5c6fde9452e3279bb465fbf4b3d1c7ae5b80a349: Status 404 returned error can't find the container with id fb6c969ddf6477f474e95f9c5c6fde9452e3279bb465fbf4b3d1c7ae5b80a349 Jan 21 16:09:34 crc kubenswrapper[4902]: I0121 16:09:34.062923 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"621700c2-adff-4cf1-81a4-fb0213e5e919","Type":"ContainerStarted","Data":"d703f5632f2cbf952b8d8487e251807ade66f1d024b3d48fde5f54990b973dc3"} Jan 21 16:09:34 crc kubenswrapper[4902]: I0121 16:09:34.063277 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"621700c2-adff-4cf1-81a4-fb0213e5e919","Type":"ContainerStarted","Data":"fb6c969ddf6477f474e95f9c5c6fde9452e3279bb465fbf4b3d1c7ae5b80a349"} Jan 21 16:09:34 crc kubenswrapper[4902]: I0121 16:09:34.067475 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a90211b-865e-43ee-a4d2-4435d5377cac","Type":"ContainerStarted","Data":"59aec4d7b002f6bac7cebbdd58347eb07bbd6d976ee19de283329b9b2320f207"} Jan 21 16:09:34 crc kubenswrapper[4902]: I0121 16:09:34.091272 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.091253739 podStartE2EDuration="3.091253739s" podCreationTimestamp="2026-01-21 16:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:34.085968021 +0000 UTC m=+5736.162801060" watchObservedRunningTime="2026-01-21 16:09:34.091253739 +0000 UTC m=+5736.168086768" Jan 21 16:09:35 crc kubenswrapper[4902]: I0121 16:09:35.078433 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"621700c2-adff-4cf1-81a4-fb0213e5e919","Type":"ContainerStarted","Data":"736f3facc63619fff931156c32623cacaeb743514ad4d9bc998e592c1498cea3"} Jan 21 16:09:35 crc kubenswrapper[4902]: I0121 16:09:35.113525 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.113504672 podStartE2EDuration="3.113504672s" podCreationTimestamp="2026-01-21 16:09:32 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:35.108737058 +0000 UTC m=+5737.185570097" watchObservedRunningTime="2026-01-21 16:09:35.113504672 +0000 UTC m=+5737.190337701" Jan 21 16:09:36 crc kubenswrapper[4902]: I0121 16:09:36.611322 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:36 crc kubenswrapper[4902]: I0121 16:09:36.679731 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69884d7f9-kfzgg"] Jan 21 16:09:36 crc kubenswrapper[4902]: I0121 16:09:36.679980 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" podUID="5a487ade-04df-42df-b2a4-694f02a2ebdb" containerName="dnsmasq-dns" containerID="cri-o://951c4e5c6873eb9e83588429fe8aca2e5cbae26eba168613139a62833929e049" gracePeriod=10 Jan 21 16:09:37 crc kubenswrapper[4902]: I0121 16:09:37.103310 4902 generic.go:334] "Generic (PLEG): container finished" podID="5a487ade-04df-42df-b2a4-694f02a2ebdb" containerID="951c4e5c6873eb9e83588429fe8aca2e5cbae26eba168613139a62833929e049" exitCode=0 Jan 21 16:09:37 crc kubenswrapper[4902]: I0121 16:09:37.103383 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" event={"ID":"5a487ade-04df-42df-b2a4-694f02a2ebdb","Type":"ContainerDied","Data":"951c4e5c6873eb9e83588429fe8aca2e5cbae26eba168613139a62833929e049"} Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.419838 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.551905 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-dns-svc\") pod \"5a487ade-04df-42df-b2a4-694f02a2ebdb\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.552034 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-sb\") pod \"5a487ade-04df-42df-b2a4-694f02a2ebdb\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.552092 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2vq2\" (UniqueName: \"kubernetes.io/projected/5a487ade-04df-42df-b2a4-694f02a2ebdb-kube-api-access-j2vq2\") pod \"5a487ade-04df-42df-b2a4-694f02a2ebdb\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.552148 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-nb\") pod \"5a487ade-04df-42df-b2a4-694f02a2ebdb\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.552202 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-config\") pod \"5a487ade-04df-42df-b2a4-694f02a2ebdb\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.562019 4902 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a487ade-04df-42df-b2a4-694f02a2ebdb-kube-api-access-j2vq2" (OuterVolumeSpecName: "kube-api-access-j2vq2") pod "5a487ade-04df-42df-b2a4-694f02a2ebdb" (UID: "5a487ade-04df-42df-b2a4-694f02a2ebdb"). InnerVolumeSpecName "kube-api-access-j2vq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.601725 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a487ade-04df-42df-b2a4-694f02a2ebdb" (UID: "5a487ade-04df-42df-b2a4-694f02a2ebdb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.602348 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5a487ade-04df-42df-b2a4-694f02a2ebdb" (UID: "5a487ade-04df-42df-b2a4-694f02a2ebdb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.603694 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-config" (OuterVolumeSpecName: "config") pod "5a487ade-04df-42df-b2a4-694f02a2ebdb" (UID: "5a487ade-04df-42df-b2a4-694f02a2ebdb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.605152 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5a487ade-04df-42df-b2a4-694f02a2ebdb" (UID: "5a487ade-04df-42df-b2a4-694f02a2ebdb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.654404 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.654445 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.654462 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2vq2\" (UniqueName: \"kubernetes.io/projected/5a487ade-04df-42df-b2a4-694f02a2ebdb-kube-api-access-j2vq2\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.654474 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.654486 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:39 crc kubenswrapper[4902]: I0121 16:09:39.126549 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" event={"ID":"5a487ade-04df-42df-b2a4-694f02a2ebdb","Type":"ContainerDied","Data":"cce4d19f30fd69fa08a849c8261f82a05dc1b4c6705764be924dca9e7b74f41e"} Jan 21 16:09:39 crc kubenswrapper[4902]: I0121 16:09:39.126618 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:09:39 crc kubenswrapper[4902]: I0121 16:09:39.126652 4902 scope.go:117] "RemoveContainer" containerID="951c4e5c6873eb9e83588429fe8aca2e5cbae26eba168613139a62833929e049" Jan 21 16:09:39 crc kubenswrapper[4902]: I0121 16:09:39.157885 4902 scope.go:117] "RemoveContainer" containerID="5cb68b975e1bdae1829713fed46eef25b840bf53e0813c38525f5a6f921ca76c" Jan 21 16:09:39 crc kubenswrapper[4902]: I0121 16:09:39.173003 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69884d7f9-kfzgg"] Jan 21 16:09:39 crc kubenswrapper[4902]: I0121 16:09:39.181447 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69884d7f9-kfzgg"] Jan 21 16:09:40 crc kubenswrapper[4902]: I0121 16:09:40.310176 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a487ade-04df-42df-b2a4-694f02a2ebdb" path="/var/lib/kubelet/pods/5a487ade-04df-42df-b2a4-694f02a2ebdb/volumes" Jan 21 16:09:41 crc kubenswrapper[4902]: I0121 16:09:41.480327 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:41 crc kubenswrapper[4902]: I0121 16:09:41.480811 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:41 crc kubenswrapper[4902]: I0121 16:09:41.507782 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:41 crc kubenswrapper[4902]: I0121 16:09:41.517671 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:42 crc kubenswrapper[4902]: I0121 16:09:42.154280 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:42 crc kubenswrapper[4902]: I0121 16:09:42.154325 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:42 crc kubenswrapper[4902]: I0121 16:09:42.468875 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:09:42 crc kubenswrapper[4902]: I0121 16:09:42.468924 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:09:42 crc kubenswrapper[4902]: I0121 16:09:42.498937 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:09:42 crc kubenswrapper[4902]: I0121 16:09:42.534898 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:09:43 crc kubenswrapper[4902]: I0121 16:09:43.166937 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:09:43 crc kubenswrapper[4902]: I0121 16:09:43.167261 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:09:44 crc kubenswrapper[4902]: I0121 16:09:44.074873 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:44 crc kubenswrapper[4902]: I0121 16:09:44.140326 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" 
Jan 21 16:09:45 crc kubenswrapper[4902]: I0121 16:09:45.102311 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:09:45 crc kubenswrapper[4902]: I0121 16:09:45.181040 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:09:45 crc kubenswrapper[4902]: I0121 16:09:45.242366 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:09:47 crc kubenswrapper[4902]: I0121 16:09:47.769552 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:09:47 crc kubenswrapper[4902]: I0121 16:09:47.770617 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.294236 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-70cb-account-create-update-hprl8"] Jan 21 16:09:53 crc kubenswrapper[4902]: E0121 16:09:53.295378 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a487ade-04df-42df-b2a4-694f02a2ebdb" containerName="dnsmasq-dns" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.295400 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a487ade-04df-42df-b2a4-694f02a2ebdb" containerName="dnsmasq-dns" Jan 21 16:09:53 crc kubenswrapper[4902]: E0121 16:09:53.295426 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a487ade-04df-42df-b2a4-694f02a2ebdb" containerName="init" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.295440 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a487ade-04df-42df-b2a4-694f02a2ebdb" containerName="init" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.295775 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a487ade-04df-42df-b2a4-694f02a2ebdb" containerName="dnsmasq-dns" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.296717 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.298472 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.301636 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-xrlb5"] Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.302964 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.312700 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-70cb-account-create-update-hprl8"] Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.320241 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/311b51a9-7349-42c3-8777-e1da9c997866-operator-scripts\") pod \"placement-70cb-account-create-update-hprl8\" (UID: \"311b51a9-7349-42c3-8777-e1da9c997866\") " pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.320337 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mb94\" (UniqueName: \"kubernetes.io/projected/311b51a9-7349-42c3-8777-e1da9c997866-kube-api-access-7mb94\") pod \"placement-70cb-account-create-update-hprl8\" (UID: \"311b51a9-7349-42c3-8777-e1da9c997866\") " pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.323179 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xrlb5"] Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.422066 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcbfk\" (UniqueName: \"kubernetes.io/projected/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-kube-api-access-fcbfk\") pod \"placement-db-create-xrlb5\" (UID: \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\") " pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.422117 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-operator-scripts\") pod \"placement-db-create-xrlb5\" (UID: \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\") " pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.422278 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/311b51a9-7349-42c3-8777-e1da9c997866-operator-scripts\") pod \"placement-70cb-account-create-update-hprl8\" (UID: \"311b51a9-7349-42c3-8777-e1da9c997866\") " pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.422342 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mb94\" (UniqueName: \"kubernetes.io/projected/311b51a9-7349-42c3-8777-e1da9c997866-kube-api-access-7mb94\") pod \"placement-70cb-account-create-update-hprl8\" (UID: \"311b51a9-7349-42c3-8777-e1da9c997866\") " pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.423586 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/311b51a9-7349-42c3-8777-e1da9c997866-operator-scripts\") pod \"placement-70cb-account-create-update-hprl8\" (UID: \"311b51a9-7349-42c3-8777-e1da9c997866\") " pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.441305 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mb94\" 
(UniqueName: \"kubernetes.io/projected/311b51a9-7349-42c3-8777-e1da9c997866-kube-api-access-7mb94\") pod \"placement-70cb-account-create-update-hprl8\" (UID: \"311b51a9-7349-42c3-8777-e1da9c997866\") " pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.524006 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcbfk\" (UniqueName: \"kubernetes.io/projected/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-kube-api-access-fcbfk\") pod \"placement-db-create-xrlb5\" (UID: \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\") " pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.524082 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-operator-scripts\") pod \"placement-db-create-xrlb5\" (UID: \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\") " pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.524831 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-operator-scripts\") pod \"placement-db-create-xrlb5\" (UID: \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\") " pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.544364 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcbfk\" (UniqueName: \"kubernetes.io/projected/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-kube-api-access-fcbfk\") pod \"placement-db-create-xrlb5\" (UID: \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\") " pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.620700 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.626741 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:54 crc kubenswrapper[4902]: I0121 16:09:54.203485 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-70cb-account-create-update-hprl8"] Jan 21 16:09:54 crc kubenswrapper[4902]: I0121 16:09:54.264468 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-70cb-account-create-update-hprl8" event={"ID":"311b51a9-7349-42c3-8777-e1da9c997866","Type":"ContainerStarted","Data":"173c6c71b1c511d1ce8e014ff16b6925a0603aac50b3a26bc4726fac330fcd1d"} Jan 21 16:09:54 crc kubenswrapper[4902]: I0121 16:09:54.270111 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xrlb5"] Jan 21 16:09:54 crc kubenswrapper[4902]: E0121 16:09:54.689374 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32dabaa5_86fa_4ff4_9a8e_7cd5360c978c.slice/crio-aef8011a0955408b9b496fd1dcaa48e11cb807245e77d4a67f379e75f01adc85.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:09:55 crc kubenswrapper[4902]: I0121 16:09:55.278388 4902 generic.go:334] "Generic (PLEG): container finished" podID="311b51a9-7349-42c3-8777-e1da9c997866" containerID="bfee1fd2715dd8d05c9392fd3ab86d1d97c355292e968dc34fcc4d66a846b5d3" exitCode=0 Jan 21 16:09:55 crc kubenswrapper[4902]: I0121 16:09:55.278499 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-70cb-account-create-update-hprl8" event={"ID":"311b51a9-7349-42c3-8777-e1da9c997866","Type":"ContainerDied","Data":"bfee1fd2715dd8d05c9392fd3ab86d1d97c355292e968dc34fcc4d66a846b5d3"} Jan 21 16:09:55 crc kubenswrapper[4902]: I0121 16:09:55.281763 4902 generic.go:334] "Generic (PLEG): container finished" podID="32dabaa5-86fa-4ff4-9a8e-7cd5360c978c" containerID="aef8011a0955408b9b496fd1dcaa48e11cb807245e77d4a67f379e75f01adc85" exitCode=0 Jan 21 16:09:55 crc kubenswrapper[4902]: I0121 16:09:55.281831 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xrlb5" event={"ID":"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c","Type":"ContainerDied","Data":"aef8011a0955408b9b496fd1dcaa48e11cb807245e77d4a67f379e75f01adc85"} Jan 21 16:09:55 crc kubenswrapper[4902]: I0121 16:09:55.281869 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xrlb5" event={"ID":"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c","Type":"ContainerStarted","Data":"fdda20c0b81f02a878c1630192d767aabc81eb313c73c05a0e37e860b871bdc4"} Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.712915 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.722277 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.840879 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-operator-scripts\") pod \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\" (UID: \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\") " Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.840965 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/311b51a9-7349-42c3-8777-e1da9c997866-operator-scripts\") pod \"311b51a9-7349-42c3-8777-e1da9c997866\" (UID: \"311b51a9-7349-42c3-8777-e1da9c997866\") " Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.841037 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcbfk\" (UniqueName: \"kubernetes.io/projected/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-kube-api-access-fcbfk\") pod \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\" (UID: \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\") " Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.841254 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mb94\" (UniqueName: \"kubernetes.io/projected/311b51a9-7349-42c3-8777-e1da9c997866-kube-api-access-7mb94\") pod \"311b51a9-7349-42c3-8777-e1da9c997866\" (UID: \"311b51a9-7349-42c3-8777-e1da9c997866\") " Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.841739 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32dabaa5-86fa-4ff4-9a8e-7cd5360c978c" (UID: "32dabaa5-86fa-4ff4-9a8e-7cd5360c978c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.841766 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311b51a9-7349-42c3-8777-e1da9c997866-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "311b51a9-7349-42c3-8777-e1da9c997866" (UID: "311b51a9-7349-42c3-8777-e1da9c997866"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.846272 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-kube-api-access-fcbfk" (OuterVolumeSpecName: "kube-api-access-fcbfk") pod "32dabaa5-86fa-4ff4-9a8e-7cd5360c978c" (UID: "32dabaa5-86fa-4ff4-9a8e-7cd5360c978c"). InnerVolumeSpecName "kube-api-access-fcbfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.853341 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/311b51a9-7349-42c3-8777-e1da9c997866-kube-api-access-7mb94" (OuterVolumeSpecName: "kube-api-access-7mb94") pod "311b51a9-7349-42c3-8777-e1da9c997866" (UID: "311b51a9-7349-42c3-8777-e1da9c997866"). InnerVolumeSpecName "kube-api-access-7mb94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.943452 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mb94\" (UniqueName: \"kubernetes.io/projected/311b51a9-7349-42c3-8777-e1da9c997866-kube-api-access-7mb94\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.943489 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.943501 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/311b51a9-7349-42c3-8777-e1da9c997866-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.943510 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcbfk\" (UniqueName: \"kubernetes.io/projected/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-kube-api-access-fcbfk\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:57 crc kubenswrapper[4902]: I0121 16:09:57.302697 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-70cb-account-create-update-hprl8" event={"ID":"311b51a9-7349-42c3-8777-e1da9c997866","Type":"ContainerDied","Data":"173c6c71b1c511d1ce8e014ff16b6925a0603aac50b3a26bc4726fac330fcd1d"} Jan 21 16:09:57 crc kubenswrapper[4902]: I0121 16:09:57.302746 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="173c6c71b1c511d1ce8e014ff16b6925a0603aac50b3a26bc4726fac330fcd1d" Jan 21 16:09:57 crc kubenswrapper[4902]: I0121 16:09:57.302714 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:57 crc kubenswrapper[4902]: I0121 16:09:57.305116 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xrlb5" event={"ID":"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c","Type":"ContainerDied","Data":"fdda20c0b81f02a878c1630192d767aabc81eb313c73c05a0e37e860b871bdc4"} Jan 21 16:09:57 crc kubenswrapper[4902]: I0121 16:09:57.305164 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdda20c0b81f02a878c1630192d767aabc81eb313c73c05a0e37e860b871bdc4" Jan 21 16:09:57 crc kubenswrapper[4902]: I0121 16:09:57.305237 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.689143 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-nnsm5"] Jan 21 16:09:58 crc kubenswrapper[4902]: E0121 16:09:58.689771 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="311b51a9-7349-42c3-8777-e1da9c997866" containerName="mariadb-account-create-update" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.689782 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="311b51a9-7349-42c3-8777-e1da9c997866" containerName="mariadb-account-create-update" Jan 21 16:09:58 crc kubenswrapper[4902]: E0121 16:09:58.689793 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32dabaa5-86fa-4ff4-9a8e-7cd5360c978c" containerName="mariadb-database-create" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.689799 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="32dabaa5-86fa-4ff4-9a8e-7cd5360c978c" containerName="mariadb-database-create" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.690145 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="32dabaa5-86fa-4ff4-9a8e-7cd5360c978c" containerName="mariadb-database-create" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.690165 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="311b51a9-7349-42c3-8777-e1da9c997866" containerName="mariadb-account-create-update" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.690735 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.692562 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.692636 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.697138 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7mj69" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.730613 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-config-data\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.730945 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-scripts\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.731004 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-logs\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.731208 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbjkm\" (UniqueName: 
\"kubernetes.io/projected/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-kube-api-access-kbjkm\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.731303 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-combined-ca-bundle\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.735017 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-nnsm5"] Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.780850 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-ddb658677-chfv4"] Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.783248 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.790567 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ddb658677-chfv4"] Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.834323 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-sb\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.834690 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbjkm\" (UniqueName: \"kubernetes.io/projected/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-kube-api-access-kbjkm\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.834741 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5rm5\" (UniqueName: \"kubernetes.io/projected/9a24ae7c-3fa5-479a-84b4-56ad2792d386-kube-api-access-w5rm5\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.834783 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-nb\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.834818 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-combined-ca-bundle\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.834860 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-config-data\") pod 
\"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.835010 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-config\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.835100 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-scripts\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.835195 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-logs\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.835305 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-dns-svc\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.836352 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-logs\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.840988 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-config-data\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.841292 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-scripts\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.842456 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-combined-ca-bundle\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.859054 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbjkm\" (UniqueName: \"kubernetes.io/projected/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-kube-api-access-kbjkm\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.937457 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-dns-svc\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.937530 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-sb\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.937579 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5rm5\" (UniqueName: \"kubernetes.io/projected/9a24ae7c-3fa5-479a-84b4-56ad2792d386-kube-api-access-w5rm5\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.937616 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-nb\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.937663 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-config\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.938638 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-sb\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.938649 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-config\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.938782 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-dns-svc\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.938917 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-nb\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.957791 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5rm5\" (UniqueName: 
\"kubernetes.io/projected/9a24ae7c-3fa5-479a-84b4-56ad2792d386-kube-api-access-w5rm5\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:59 crc kubenswrapper[4902]: I0121 16:09:59.020317 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:59 crc kubenswrapper[4902]: I0121 16:09:59.106919 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:59 crc kubenswrapper[4902]: I0121 16:09:59.457518 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-nnsm5"] Jan 21 16:09:59 crc kubenswrapper[4902]: W0121 16:09:59.465359 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf532d2b6_7ad3_4b83_9100_d4b94d5a512d.slice/crio-fe24954b37cc941c89ad2d9dcf99daa5617245da439afaa7d0f72c86750a77a2 WatchSource:0}: Error finding container fe24954b37cc941c89ad2d9dcf99daa5617245da439afaa7d0f72c86750a77a2: Status 404 returned error can't find the container with id fe24954b37cc941c89ad2d9dcf99daa5617245da439afaa7d0f72c86750a77a2 Jan 21 16:09:59 crc kubenswrapper[4902]: I0121 16:09:59.583873 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ddb658677-chfv4"] Jan 21 16:10:00 crc kubenswrapper[4902]: I0121 16:10:00.345195 4902 generic.go:334] "Generic (PLEG): container finished" podID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" containerID="84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9" exitCode=0 Jan 21 16:10:00 crc kubenswrapper[4902]: I0121 16:10:00.345379 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ddb658677-chfv4" event={"ID":"9a24ae7c-3fa5-479a-84b4-56ad2792d386","Type":"ContainerDied","Data":"84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9"} Jan 21 16:10:00 crc kubenswrapper[4902]: I0121 16:10:00.345543 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ddb658677-chfv4" event={"ID":"9a24ae7c-3fa5-479a-84b4-56ad2792d386","Type":"ContainerStarted","Data":"cdda0da4083f2f0e9099ddfcab1f5b7e57fb3c8539f90cf5b020d3761c23f6b0"} Jan 21 16:10:00 crc kubenswrapper[4902]: I0121 16:10:00.350951 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nnsm5" event={"ID":"f532d2b6-7ad3-4b83-9100-d4b94d5a512d","Type":"ContainerStarted","Data":"03abc4558e909383d3d41af8248acf4829b9d6450d3df00a2f6958bd3e3264e7"} Jan 21 16:10:00 crc kubenswrapper[4902]: I0121 16:10:00.351008 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nnsm5" event={"ID":"f532d2b6-7ad3-4b83-9100-d4b94d5a512d","Type":"ContainerStarted","Data":"fe24954b37cc941c89ad2d9dcf99daa5617245da439afaa7d0f72c86750a77a2"} Jan 21 16:10:00 crc kubenswrapper[4902]: I0121 16:10:00.450381 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-nnsm5" podStartSLOduration=2.450362603 podStartE2EDuration="2.450362603s" podCreationTimestamp="2026-01-21 16:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:10:00.445126896 +0000 UTC m=+5762.521959925" watchObservedRunningTime="2026-01-21 16:10:00.450362603 +0000 UTC m=+5762.527195632" Jan 21 16:10:01 crc kubenswrapper[4902]: 
I0121 16:10:01.362713 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ddb658677-chfv4" event={"ID":"9a24ae7c-3fa5-479a-84b4-56ad2792d386","Type":"ContainerStarted","Data":"ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7"} Jan 21 16:10:01 crc kubenswrapper[4902]: I0121 16:10:01.363033 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:10:01 crc kubenswrapper[4902]: I0121 16:10:01.365438 4902 generic.go:334] "Generic (PLEG): container finished" podID="f532d2b6-7ad3-4b83-9100-d4b94d5a512d" containerID="03abc4558e909383d3d41af8248acf4829b9d6450d3df00a2f6958bd3e3264e7" exitCode=0 Jan 21 16:10:01 crc kubenswrapper[4902]: I0121 16:10:01.365487 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nnsm5" event={"ID":"f532d2b6-7ad3-4b83-9100-d4b94d5a512d","Type":"ContainerDied","Data":"03abc4558e909383d3d41af8248acf4829b9d6450d3df00a2f6958bd3e3264e7"} Jan 21 16:10:01 crc kubenswrapper[4902]: I0121 16:10:01.387813 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-ddb658677-chfv4" podStartSLOduration=3.387790122 podStartE2EDuration="3.387790122s" podCreationTimestamp="2026-01-21 16:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:10:01.383393648 +0000 UTC m=+5763.460226677" watchObservedRunningTime="2026-01-21 16:10:01.387790122 +0000 UTC m=+5763.464623151" Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.723847 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nnsm5" Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.915755 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-scripts\") pod \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.915814 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-config-data\") pod \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.915857 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-combined-ca-bundle\") pod \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.915945 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-logs\") pod \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.916075 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbjkm\" (UniqueName: \"kubernetes.io/projected/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-kube-api-access-kbjkm\") pod \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " Jan 21 16:10:02 crc kubenswrapper[4902]: 
I0121 16:10:02.916623 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-logs" (OuterVolumeSpecName: "logs") pod "f532d2b6-7ad3-4b83-9100-d4b94d5a512d" (UID: "f532d2b6-7ad3-4b83-9100-d4b94d5a512d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.923473 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-scripts" (OuterVolumeSpecName: "scripts") pod "f532d2b6-7ad3-4b83-9100-d4b94d5a512d" (UID: "f532d2b6-7ad3-4b83-9100-d4b94d5a512d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.926216 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-kube-api-access-kbjkm" (OuterVolumeSpecName: "kube-api-access-kbjkm") pod "f532d2b6-7ad3-4b83-9100-d4b94d5a512d" (UID: "f532d2b6-7ad3-4b83-9100-d4b94d5a512d"). InnerVolumeSpecName "kube-api-access-kbjkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.942221 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f532d2b6-7ad3-4b83-9100-d4b94d5a512d" (UID: "f532d2b6-7ad3-4b83-9100-d4b94d5a512d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.942691 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-config-data" (OuterVolumeSpecName: "config-data") pod "f532d2b6-7ad3-4b83-9100-d4b94d5a512d" (UID: "f532d2b6-7ad3-4b83-9100-d4b94d5a512d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.017935 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbjkm\" (UniqueName: \"kubernetes.io/projected/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-kube-api-access-kbjkm\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.017973 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.017983 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.017993 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.018001 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.384611 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nnsm5" event={"ID":"f532d2b6-7ad3-4b83-9100-d4b94d5a512d","Type":"ContainerDied","Data":"fe24954b37cc941c89ad2d9dcf99daa5617245da439afaa7d0f72c86750a77a2"} Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.384910 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe24954b37cc941c89ad2d9dcf99daa5617245da439afaa7d0f72c86750a77a2" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.385023 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nnsm5" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.864759 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-856775b9dd-twjxc"] Jan 21 16:10:03 crc kubenswrapper[4902]: E0121 16:10:03.865403 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f532d2b6-7ad3-4b83-9100-d4b94d5a512d" containerName="placement-db-sync" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.865426 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f532d2b6-7ad3-4b83-9100-d4b94d5a512d" containerName="placement-db-sync" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.865730 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f532d2b6-7ad3-4b83-9100-d4b94d5a512d" containerName="placement-db-sync" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.867035 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.870490 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.870527 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.870579 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.870977 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7mj69" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.871136 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.880540 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-856775b9dd-twjxc"] Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.034428 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-logs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.034487 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtwz4\" (UniqueName: \"kubernetes.io/projected/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-kube-api-access-xtwz4\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.034523 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-internal-tls-certs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.034540 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-public-tls-certs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.034576 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-combined-ca-bundle\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.035242 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-scripts\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.035306 
4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-config-data\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.136661 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtwz4\" (UniqueName: \"kubernetes.io/projected/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-kube-api-access-xtwz4\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.136720 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-internal-tls-certs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.136746 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-public-tls-certs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.137314 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-combined-ca-bundle\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.137408 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-scripts\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.137459 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-config-data\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.137565 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-logs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.137986 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-logs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.140623 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-public-tls-certs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.140633 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-internal-tls-certs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.140984 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-config-data\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.142291 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-scripts\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.147739 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-combined-ca-bundle\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.160571 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtwz4\" (UniqueName: \"kubernetes.io/projected/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-kube-api-access-xtwz4\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.203167 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.629484 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-856775b9dd-twjxc"] Jan 21 16:10:04 crc kubenswrapper[4902]: W0121 16:10:04.636915 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43a8c70b_ebc7_4ce0_8d5c_e790226eff45.slice/crio-a363c3df22484b95080493226ef6de548bce7b37077a62ea634a957ad031d675 WatchSource:0}: Error finding container a363c3df22484b95080493226ef6de548bce7b37077a62ea634a957ad031d675: Status 404 returned error can't find the container with id a363c3df22484b95080493226ef6de548bce7b37077a62ea634a957ad031d675 Jan 21 16:10:05 crc kubenswrapper[4902]: I0121 16:10:05.405399 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-856775b9dd-twjxc" event={"ID":"43a8c70b-ebc7-4ce0-8d5c-e790226eff45","Type":"ContainerStarted","Data":"67794b6a2b1e5568faa321d86fa25828738739aa68ab11d7c7db8061fb2e5729"} Jan 21 16:10:05 crc kubenswrapper[4902]: I0121 16:10:05.405707 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-856775b9dd-twjxc" event={"ID":"43a8c70b-ebc7-4ce0-8d5c-e790226eff45","Type":"ContainerStarted","Data":"abf7fda4412b62127e45777e86533d8cca3f8c8b810b11811bd51c3975da6d2c"} Jan 21 16:10:05 crc kubenswrapper[4902]: I0121 16:10:05.405722 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:05 crc kubenswrapper[4902]: I0121 16:10:05.405732 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-856775b9dd-twjxc" event={"ID":"43a8c70b-ebc7-4ce0-8d5c-e790226eff45","Type":"ContainerStarted","Data":"a363c3df22484b95080493226ef6de548bce7b37077a62ea634a957ad031d675"} Jan 21 16:10:05 crc kubenswrapper[4902]: I0121 16:10:05.405746 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:05 crc kubenswrapper[4902]: I0121 16:10:05.429748 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-856775b9dd-twjxc" podStartSLOduration=2.42972895 podStartE2EDuration="2.42972895s" podCreationTimestamp="2026-01-21 16:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:10:05.423411193 +0000 UTC m=+5767.500244252" watchObservedRunningTime="2026-01-21 16:10:05.42972895 +0000 UTC m=+5767.506561969" Jan 21 16:10:09 crc kubenswrapper[4902]: I0121 16:10:09.109340 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:10:09 crc kubenswrapper[4902]: I0121 16:10:09.204115 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bdc9ddbfc-5j79v"] Jan 21 16:10:09 crc kubenswrapper[4902]: I0121 16:10:09.204386 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" podUID="f87b7e66-2e90-42f0-babb-fc5013fa6077" containerName="dnsmasq-dns" containerID="cri-o://e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7" gracePeriod=10 Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.200898 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.351666 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-config\") pod \"f87b7e66-2e90-42f0-babb-fc5013fa6077\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.351724 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-dns-svc\") pod \"f87b7e66-2e90-42f0-babb-fc5013fa6077\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.351778 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2cj4\" (UniqueName: \"kubernetes.io/projected/f87b7e66-2e90-42f0-babb-fc5013fa6077-kube-api-access-s2cj4\") pod \"f87b7e66-2e90-42f0-babb-fc5013fa6077\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.351822 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-sb\") pod \"f87b7e66-2e90-42f0-babb-fc5013fa6077\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.353643 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-nb\") pod \"f87b7e66-2e90-42f0-babb-fc5013fa6077\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.374259 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f87b7e66-2e90-42f0-babb-fc5013fa6077-kube-api-access-s2cj4" (OuterVolumeSpecName: "kube-api-access-s2cj4") pod "f87b7e66-2e90-42f0-babb-fc5013fa6077" (UID: "f87b7e66-2e90-42f0-babb-fc5013fa6077"). InnerVolumeSpecName "kube-api-access-s2cj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.396727 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-config" (OuterVolumeSpecName: "config") pod "f87b7e66-2e90-42f0-babb-fc5013fa6077" (UID: "f87b7e66-2e90-42f0-babb-fc5013fa6077"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.397621 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f87b7e66-2e90-42f0-babb-fc5013fa6077" (UID: "f87b7e66-2e90-42f0-babb-fc5013fa6077"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.412184 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f87b7e66-2e90-42f0-babb-fc5013fa6077" (UID: "f87b7e66-2e90-42f0-babb-fc5013fa6077"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.414667 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f87b7e66-2e90-42f0-babb-fc5013fa6077" (UID: "f87b7e66-2e90-42f0-babb-fc5013fa6077"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.457779 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.458196 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.458321 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2cj4\" (UniqueName: \"kubernetes.io/projected/f87b7e66-2e90-42f0-babb-fc5013fa6077-kube-api-access-s2cj4\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.458416 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.458543 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.460159 4902 generic.go:334] "Generic (PLEG): container finished" podID="f87b7e66-2e90-42f0-babb-fc5013fa6077" containerID="e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7" exitCode=0 Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.460266 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" event={"ID":"f87b7e66-2e90-42f0-babb-fc5013fa6077","Type":"ContainerDied","Data":"e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7"} Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.460353 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" event={"ID":"f87b7e66-2e90-42f0-babb-fc5013fa6077","Type":"ContainerDied","Data":"00e6bf2e91ac88d92bb8b5aa081d62fc62c975ee1e0acebbcdf006224895188a"} Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.460418 4902 scope.go:117] "RemoveContainer" containerID="e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.460577 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.535429 4902 scope.go:117] "RemoveContainer" containerID="9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.537835 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bdc9ddbfc-5j79v"] Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.546807 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bdc9ddbfc-5j79v"] Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.564092 4902 scope.go:117] "RemoveContainer" containerID="e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7" Jan 21 16:10:10 crc kubenswrapper[4902]: E0121 16:10:10.564640 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7\": container with ID starting with e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7 not found: ID does not exist" containerID="e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.564677 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7"} err="failed to get container status \"e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7\": rpc error: code = NotFound desc = could not find container \"e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7\": container with ID starting with e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7 not found: ID does not exist" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.564705 4902 scope.go:117] "RemoveContainer" containerID="9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545" Jan 21 16:10:10 crc kubenswrapper[4902]: E0121 16:10:10.564969 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545\": container with ID starting with 9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545 not found: ID does not exist" containerID="9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.564993 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545"} err="failed to get container status \"9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545\": rpc error: code = NotFound desc = could not find container \"9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545\": container with ID starting with 9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545 not found: ID does not exist" Jan 21 16:10:12 crc kubenswrapper[4902]: I0121 16:10:12.310718 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f87b7e66-2e90-42f0-babb-fc5013fa6077" path="/var/lib/kubelet/pods/f87b7e66-2e90-42f0-babb-fc5013fa6077/volumes" Jan 21 16:10:17 crc kubenswrapper[4902]: I0121 16:10:17.770425 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:10:17 crc kubenswrapper[4902]: I0121 16:10:17.771245 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:10:24 crc kubenswrapper[4902]: I0121 16:10:24.678618 4902 scope.go:117] "RemoveContainer" containerID="311b61cd815d9e9e4c95e8d3428eb904438d2e7a6efb54993e589d294d8780c4" Jan 21 16:10:24 crc kubenswrapper[4902]: I0121 16:10:24.717602 4902 scope.go:117] "RemoveContainer" containerID="401f56f07810074a750a97f4da0d7c60e93e7a8c193e6d8365b52546dfbecc13" Jan 21 16:10:24 crc kubenswrapper[4902]: I0121 16:10:24.763927 4902 scope.go:117] "RemoveContainer" containerID="c548aa5ba6d350e77b6beec3d64af186cf452dd8633be8614338761c7800ca06" Jan 21 16:10:24 crc kubenswrapper[4902]: I0121 16:10:24.787510 4902 scope.go:117] "RemoveContainer" containerID="d712f1b4cdf6346532f6de92bb64a6956b68ba70087482d2c995c46acdeba1e0" Jan 21 16:10:24 crc kubenswrapper[4902]: I0121 16:10:24.830908 4902 scope.go:117] "RemoveContainer" containerID="aba9f698bb3c03d4e31ec5eca5323d9be2568c046cc99860cd7803581de5e34e" Jan 21 16:10:35 crc kubenswrapper[4902]: I0121 16:10:35.628661 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:35 crc kubenswrapper[4902]: I0121 16:10:35.630586 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:47 crc kubenswrapper[4902]: I0121 16:10:47.771328 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:10:47 crc kubenswrapper[4902]: I0121 16:10:47.771903 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:10:47 crc kubenswrapper[4902]: I0121 16:10:47.771957 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 16:10:47 crc kubenswrapper[4902]: I0121 16:10:47.773277 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db5e286ed12d5cdac8541e22aa5c6794629a15f27a4e802d85c369fc2b4f4f6b"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:10:47 crc kubenswrapper[4902]: I0121 16:10:47.773342 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://db5e286ed12d5cdac8541e22aa5c6794629a15f27a4e802d85c369fc2b4f4f6b" gracePeriod=600 Jan 21 16:10:48 crc 
kubenswrapper[4902]: I0121 16:10:48.843565 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="db5e286ed12d5cdac8541e22aa5c6794629a15f27a4e802d85c369fc2b4f4f6b" exitCode=0 Jan 21 16:10:48 crc kubenswrapper[4902]: I0121 16:10:48.844111 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"db5e286ed12d5cdac8541e22aa5c6794629a15f27a4e802d85c369fc2b4f4f6b"} Jan 21 16:10:48 crc kubenswrapper[4902]: I0121 16:10:48.844145 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac"} Jan 21 16:10:48 crc kubenswrapper[4902]: I0121 16:10:48.844166 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.290627 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-8csjv"] Jan 21 16:10:56 crc kubenswrapper[4902]: E0121 16:10:56.291575 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f87b7e66-2e90-42f0-babb-fc5013fa6077" containerName="init" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.291594 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87b7e66-2e90-42f0-babb-fc5013fa6077" containerName="init" Jan 21 16:10:56 crc kubenswrapper[4902]: E0121 16:10:56.291611 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f87b7e66-2e90-42f0-babb-fc5013fa6077" containerName="dnsmasq-dns" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.291619 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87b7e66-2e90-42f0-babb-fc5013fa6077" containerName="dnsmasq-dns" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.291850 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f87b7e66-2e90-42f0-babb-fc5013fa6077" containerName="dnsmasq-dns" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.292597 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.305929 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8csjv"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.380851 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-974x9"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.382241 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.391328 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-974x9"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.392275 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4lht\" (UniqueName: \"kubernetes.io/projected/5963807a-fc48-485b-a3a5-7b07791dfdd0-kube-api-access-d4lht\") pod \"nova-api-db-create-8csjv\" (UID: \"5963807a-fc48-485b-a3a5-7b07791dfdd0\") " pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.392537 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5963807a-fc48-485b-a3a5-7b07791dfdd0-operator-scripts\") pod \"nova-api-db-create-8csjv\" (UID: \"5963807a-fc48-485b-a3a5-7b07791dfdd0\") " pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.491006 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3bb8-account-create-update-k967z"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.492099 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.493998 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.494736 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5963807a-fc48-485b-a3a5-7b07791dfdd0-operator-scripts\") pod \"nova-api-db-create-8csjv\" (UID: \"5963807a-fc48-485b-a3a5-7b07791dfdd0\") " pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.494866 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4lht\" (UniqueName: \"kubernetes.io/projected/5963807a-fc48-485b-a3a5-7b07791dfdd0-kube-api-access-d4lht\") pod \"nova-api-db-create-8csjv\" (UID: \"5963807a-fc48-485b-a3a5-7b07791dfdd0\") " pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.494943 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26phd\" (UniqueName: \"kubernetes.io/projected/fbe639d2-1844-47b8-b4c8-3b602547070a-kube-api-access-26phd\") pod \"nova-cell0-db-create-974x9\" (UID: \"fbe639d2-1844-47b8-b4c8-3b602547070a\") " pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.494984 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe639d2-1844-47b8-b4c8-3b602547070a-operator-scripts\") pod \"nova-cell0-db-create-974x9\" (UID: \"fbe639d2-1844-47b8-b4c8-3b602547070a\") " pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.495653 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5963807a-fc48-485b-a3a5-7b07791dfdd0-operator-scripts\") pod \"nova-api-db-create-8csjv\" (UID: \"5963807a-fc48-485b-a3a5-7b07791dfdd0\") " 
pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.512672 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3bb8-account-create-update-k967z"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.521526 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4lht\" (UniqueName: \"kubernetes.io/projected/5963807a-fc48-485b-a3a5-7b07791dfdd0-kube-api-access-d4lht\") pod \"nova-api-db-create-8csjv\" (UID: \"5963807a-fc48-485b-a3a5-7b07791dfdd0\") " pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.595328 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-9kql9"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.596428 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26phd\" (UniqueName: \"kubernetes.io/projected/fbe639d2-1844-47b8-b4c8-3b602547070a-kube-api-access-26phd\") pod \"nova-cell0-db-create-974x9\" (UID: \"fbe639d2-1844-47b8-b4c8-3b602547070a\") " pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.596482 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe639d2-1844-47b8-b4c8-3b602547070a-operator-scripts\") pod \"nova-cell0-db-create-974x9\" (UID: \"fbe639d2-1844-47b8-b4c8-3b602547070a\") " pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.596527 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c847ba2-4e65-4677-b8b6-514162b0c1bc-operator-scripts\") pod \"nova-api-3bb8-account-create-update-k967z\" (UID: \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\") " pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.596577 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bnq5\" (UniqueName: \"kubernetes.io/projected/9c847ba2-4e65-4677-b8b6-514162b0c1bc-kube-api-access-2bnq5\") pod \"nova-api-3bb8-account-create-update-k967z\" (UID: \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\") " pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.596734 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.597394 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe639d2-1844-47b8-b4c8-3b602547070a-operator-scripts\") pod \"nova-cell0-db-create-974x9\" (UID: \"fbe639d2-1844-47b8-b4c8-3b602547070a\") " pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.607118 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9kql9"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.633777 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.636690 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26phd\" (UniqueName: \"kubernetes.io/projected/fbe639d2-1844-47b8-b4c8-3b602547070a-kube-api-access-26phd\") pod \"nova-cell0-db-create-974x9\" (UID: \"fbe639d2-1844-47b8-b4c8-3b602547070a\") " pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.695634 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-4fdb-account-create-update-4c46m"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.696272 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.697223 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.697464 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c847ba2-4e65-4677-b8b6-514162b0c1bc-operator-scripts\") pod \"nova-api-3bb8-account-create-update-k967z\" (UID: \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\") " pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.697506 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6kv4\" (UniqueName: \"kubernetes.io/projected/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-kube-api-access-f6kv4\") pod \"nova-cell1-db-create-9kql9\" (UID: \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\") " pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.697548 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bnq5\" (UniqueName: \"kubernetes.io/projected/9c847ba2-4e65-4677-b8b6-514162b0c1bc-kube-api-access-2bnq5\") pod \"nova-api-3bb8-account-create-update-k967z\" (UID: \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\") " pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.697591 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-operator-scripts\") pod \"nova-cell1-db-create-9kql9\" (UID: \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\") " pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.698517 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c847ba2-4e65-4677-b8b6-514162b0c1bc-operator-scripts\") pod \"nova-api-3bb8-account-create-update-k967z\" (UID: \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\") " pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.700374 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.707564 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4fdb-account-create-update-4c46m"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.715728 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2bnq5\" (UniqueName: \"kubernetes.io/projected/9c847ba2-4e65-4677-b8b6-514162b0c1bc-kube-api-access-2bnq5\") pod \"nova-api-3bb8-account-create-update-k967z\" (UID: \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\") " pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.799752 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-operator-scripts\") pod \"nova-cell0-4fdb-account-create-update-4c46m\" (UID: \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\") " pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.800088 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlrcs\" (UniqueName: \"kubernetes.io/projected/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-kube-api-access-jlrcs\") pod \"nova-cell0-4fdb-account-create-update-4c46m\" (UID: \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\") " pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.800119 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6kv4\" (UniqueName: \"kubernetes.io/projected/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-kube-api-access-f6kv4\") pod \"nova-cell1-db-create-9kql9\" (UID: \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\") " pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.800174 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-operator-scripts\") pod \"nova-cell1-db-create-9kql9\" (UID: \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\") " pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.801147 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-operator-scripts\") pod \"nova-cell1-db-create-9kql9\" (UID: \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\") " pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.815846 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.827065 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6kv4\" (UniqueName: \"kubernetes.io/projected/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-kube-api-access-f6kv4\") pod \"nova-cell1-db-create-9kql9\" (UID: \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\") " pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.901537 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-operator-scripts\") pod \"nova-cell0-4fdb-account-create-update-4c46m\" (UID: \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\") " pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.901597 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlrcs\" (UniqueName: \"kubernetes.io/projected/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-kube-api-access-jlrcs\") pod \"nova-cell0-4fdb-account-create-update-4c46m\" (UID: \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\") " pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.903036 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-operator-scripts\") pod \"nova-cell0-4fdb-account-create-update-4c46m\" (UID: \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\") " pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.905649 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6d31-account-create-update-v52m2"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.907198 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.910058 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.918454 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlrcs\" (UniqueName: \"kubernetes.io/projected/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-kube-api-access-jlrcs\") pod \"nova-cell0-4fdb-account-create-update-4c46m\" (UID: \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\") " pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.927578 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6d31-account-create-update-v52m2"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.994016 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.005592 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4f58498-29bd-47d8-8af1-ac98b4a9f510-operator-scripts\") pod \"nova-cell1-6d31-account-create-update-v52m2\" (UID: \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\") " pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.005720 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7hr9\" (UniqueName: \"kubernetes.io/projected/e4f58498-29bd-47d8-8af1-ac98b4a9f510-kube-api-access-v7hr9\") pod \"nova-cell1-6d31-account-create-update-v52m2\" (UID: \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\") " pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.107743 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4f58498-29bd-47d8-8af1-ac98b4a9f510-operator-scripts\") pod \"nova-cell1-6d31-account-create-update-v52m2\" (UID: \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\") " pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.107835 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7hr9\" (UniqueName: \"kubernetes.io/projected/e4f58498-29bd-47d8-8af1-ac98b4a9f510-kube-api-access-v7hr9\") pod \"nova-cell1-6d31-account-create-update-v52m2\" (UID: \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\") " pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.109173 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4f58498-29bd-47d8-8af1-ac98b4a9f510-operator-scripts\") pod \"nova-cell1-6d31-account-create-update-v52m2\" (UID: \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\") " pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.141820 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.161272 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7hr9\" (UniqueName: \"kubernetes.io/projected/e4f58498-29bd-47d8-8af1-ac98b4a9f510-kube-api-access-v7hr9\") pod \"nova-cell1-6d31-account-create-update-v52m2\" (UID: \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\") " pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.181202 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8csjv"] Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.230212 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.268308 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-974x9"] Jan 21 16:10:57 crc kubenswrapper[4902]: W0121 16:10:57.276249 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbe639d2_1844_47b8_b4c8_3b602547070a.slice/crio-7f37d779c700826fe60bdfa1a4ef8f061e8d1556831ebeec2ca1fc724c5892c6 WatchSource:0}: Error finding container 7f37d779c700826fe60bdfa1a4ef8f061e8d1556831ebeec2ca1fc724c5892c6: Status 404 returned error can't find the container with id 7f37d779c700826fe60bdfa1a4ef8f061e8d1556831ebeec2ca1fc724c5892c6 Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.307476 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3bb8-account-create-update-k967z"] Jan 21 16:10:57 crc kubenswrapper[4902]: W0121 16:10:57.314933 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c847ba2_4e65_4677_b8b6_514162b0c1bc.slice/crio-cfa1f35b10366d8ca884c346ca0693b8589dcc18b17aa9639bcf675816b48969 WatchSource:0}: Error finding container cfa1f35b10366d8ca884c346ca0693b8589dcc18b17aa9639bcf675816b48969: Status 404 returned error can't find the container with id cfa1f35b10366d8ca884c346ca0693b8589dcc18b17aa9639bcf675816b48969 Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.491196 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9kql9"] Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.603519 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4fdb-account-create-update-4c46m"] Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.779657 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6d31-account-create-update-v52m2"] Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.939376 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9kql9" event={"ID":"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172","Type":"ContainerStarted","Data":"99ec12ef319b65df6402a210f267dd882457bc1525334bf5d8dcc815a06bbc60"} Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.942945 4902 generic.go:334] "Generic (PLEG): container finished" podID="fbe639d2-1844-47b8-b4c8-3b602547070a" containerID="99ee9f7749f725c9768c807df30815b54542175e3f04ac09d8600799af1e8a19" exitCode=0 Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.943014 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-974x9" event={"ID":"fbe639d2-1844-47b8-b4c8-3b602547070a","Type":"ContainerDied","Data":"99ee9f7749f725c9768c807df30815b54542175e3f04ac09d8600799af1e8a19"} Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.943097 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-974x9" event={"ID":"fbe639d2-1844-47b8-b4c8-3b602547070a","Type":"ContainerStarted","Data":"7f37d779c700826fe60bdfa1a4ef8f061e8d1556831ebeec2ca1fc724c5892c6"} Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.944256 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" event={"ID":"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58","Type":"ContainerStarted","Data":"98651171c7b939f6328de68d0a540446143d9ad43bf62668613678d3ae0d8135"} Jan 21 
Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.946106 4902 generic.go:334] "Generic (PLEG): container finished" podID="5963807a-fc48-485b-a3a5-7b07791dfdd0" containerID="a9669cf760ec41fe8c9ac56172de1dfc2733858ea7763d6ffbfc15c535c182ce" exitCode=0
Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.946161 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8csjv" event={"ID":"5963807a-fc48-485b-a3a5-7b07791dfdd0","Type":"ContainerDied","Data":"a9669cf760ec41fe8c9ac56172de1dfc2733858ea7763d6ffbfc15c535c182ce"}
Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.946181 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8csjv" event={"ID":"5963807a-fc48-485b-a3a5-7b07791dfdd0","Type":"ContainerStarted","Data":"07fcf8f0edbd4f84bc91164ded641268ec9af1fe660812bf6c9ef74d84b50a42"}
Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.948712 4902 generic.go:334] "Generic (PLEG): container finished" podID="9c847ba2-4e65-4677-b8b6-514162b0c1bc" containerID="f7c278e1da3c54353778da6f63a10b5d381146af280b9714be7ae6c71d2e3772" exitCode=0
Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.948771 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3bb8-account-create-update-k967z" event={"ID":"9c847ba2-4e65-4677-b8b6-514162b0c1bc","Type":"ContainerDied","Data":"f7c278e1da3c54353778da6f63a10b5d381146af280b9714be7ae6c71d2e3772"}
Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.948800 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3bb8-account-create-update-k967z" event={"ID":"9c847ba2-4e65-4677-b8b6-514162b0c1bc","Type":"ContainerStarted","Data":"cfa1f35b10366d8ca884c346ca0693b8589dcc18b17aa9639bcf675816b48969"}
Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.950070 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6d31-account-create-update-v52m2" event={"ID":"e4f58498-29bd-47d8-8af1-ac98b4a9f510","Type":"ContainerStarted","Data":"fb4a64e2025c12ae8adaca5cf9a94c80a4cbadedb759b77ea3de8530b331e28a"}
Jan 21 16:10:58 crc kubenswrapper[4902]: I0121 16:10:58.968692 4902 generic.go:334] "Generic (PLEG): container finished" podID="e4f58498-29bd-47d8-8af1-ac98b4a9f510" containerID="1b0ff0cc281058854299a37c0eae467595b367d385ca015e5d0368dda142849e" exitCode=0
Jan 21 16:10:58 crc kubenswrapper[4902]: I0121 16:10:58.968907 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6d31-account-create-update-v52m2" event={"ID":"e4f58498-29bd-47d8-8af1-ac98b4a9f510","Type":"ContainerDied","Data":"1b0ff0cc281058854299a37c0eae467595b367d385ca015e5d0368dda142849e"}
Jan 21 16:10:58 crc kubenswrapper[4902]: I0121 16:10:58.972457 4902 generic.go:334] "Generic (PLEG): container finished" podID="ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172" containerID="e2e258a3a1605851e7cb0ee36afe37bb54f98c9526d53b997a37f6c2cacd6192" exitCode=0
Jan 21 16:10:58 crc kubenswrapper[4902]: I0121 16:10:58.972558 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9kql9" event={"ID":"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172","Type":"ContainerDied","Data":"e2e258a3a1605851e7cb0ee36afe37bb54f98c9526d53b997a37f6c2cacd6192"}
Jan 21 16:10:58 crc kubenswrapper[4902]: I0121 16:10:58.976767 4902 generic.go:334] "Generic (PLEG): container finished" podID="bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58" containerID="9e04cfcc3e9b81819b9ca08bf91b4f4038827b55094f93cb2cd3586ac9a3d537" exitCode=0
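
The generic.go:334 lines and the paired kubelet.go:2453 lines are the generic PLEG (Pod Lifecycle Event Generator) at work: it periodically relists containers from the runtime, diffs the result against its previous snapshot, and emits ContainerStarted/ContainerDied events that the sync loop then handles. A container that both appeared and exited between two relists, as these one-shot mariadb-* job containers do, yields both events in the same batch. A toy relist diff, with invented types (the real PLEG lives in pkg/kubelet/pleg):

    package main

    import "fmt"

    // snapshot maps containerID -> still running?
    type snapshot map[string]bool

    type event struct{ id, kind string }

    // relist diffs the runtime's current view against the previous one.
    func relist(prev, cur snapshot) []event {
        var evs []event
        for id, running := range cur {
            prevRunning, seen := prev[id]
            if !seen {
                evs = append(evs, event{id, "ContainerStarted"})
            }
            if (!seen || prevRunning) && !running {
                evs = append(evs, event{id, "ContainerDied"})
            }
        }
        return evs
    }

    func main() {
        // The db-create job container ran to completion between two relists,
        // so a single batch carries both its start and its death; the exit
        // code is read from the runtime's container status separately.
        for _, e := range relist(snapshot{}, snapshot{"99ee9f7749f7...": false}) {
            fmt.Println("SyncLoop (PLEG): event for pod", e.kind, e.id)
        }
    }

Jan 21 16:10:58 crc kubenswrapper[4902]: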
I0121 16:10:58.977015 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" event={"ID":"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58","Type":"ContainerDied","Data":"9e04cfcc3e9b81819b9ca08bf91b4f4038827b55094f93cb2cd3586ac9a3d537"} Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.379944 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.459431 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.471431 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.556119 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lht\" (UniqueName: \"kubernetes.io/projected/5963807a-fc48-485b-a3a5-7b07791dfdd0-kube-api-access-d4lht\") pod \"5963807a-fc48-485b-a3a5-7b07791dfdd0\" (UID: \"5963807a-fc48-485b-a3a5-7b07791dfdd0\") " Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.556274 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c847ba2-4e65-4677-b8b6-514162b0c1bc-operator-scripts\") pod \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\" (UID: \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\") " Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.556312 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26phd\" (UniqueName: \"kubernetes.io/projected/fbe639d2-1844-47b8-b4c8-3b602547070a-kube-api-access-26phd\") pod \"fbe639d2-1844-47b8-b4c8-3b602547070a\" (UID: \"fbe639d2-1844-47b8-b4c8-3b602547070a\") " Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.556363 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5963807a-fc48-485b-a3a5-7b07791dfdd0-operator-scripts\") pod \"5963807a-fc48-485b-a3a5-7b07791dfdd0\" (UID: \"5963807a-fc48-485b-a3a5-7b07791dfdd0\") " Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.556732 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bnq5\" (UniqueName: \"kubernetes.io/projected/9c847ba2-4e65-4677-b8b6-514162b0c1bc-kube-api-access-2bnq5\") pod \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\" (UID: \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\") " Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.556793 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe639d2-1844-47b8-b4c8-3b602547070a-operator-scripts\") pod \"fbe639d2-1844-47b8-b4c8-3b602547070a\" (UID: \"fbe639d2-1844-47b8-b4c8-3b602547070a\") " Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.556835 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5963807a-fc48-485b-a3a5-7b07791dfdd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5963807a-fc48-485b-a3a5-7b07791dfdd0" (UID: "5963807a-fc48-485b-a3a5-7b07791dfdd0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.556991 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c847ba2-4e65-4677-b8b6-514162b0c1bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c847ba2-4e65-4677-b8b6-514162b0c1bc" (UID: "9c847ba2-4e65-4677-b8b6-514162b0c1bc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.557522 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe639d2-1844-47b8-b4c8-3b602547070a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fbe639d2-1844-47b8-b4c8-3b602547070a" (UID: "fbe639d2-1844-47b8-b4c8-3b602547070a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.557645 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c847ba2-4e65-4677-b8b6-514162b0c1bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.557684 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5963807a-fc48-485b-a3a5-7b07791dfdd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.557693 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe639d2-1844-47b8-b4c8-3b602547070a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.561953 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5963807a-fc48-485b-a3a5-7b07791dfdd0-kube-api-access-d4lht" (OuterVolumeSpecName: "kube-api-access-d4lht") pod "5963807a-fc48-485b-a3a5-7b07791dfdd0" (UID: "5963807a-fc48-485b-a3a5-7b07791dfdd0"). InnerVolumeSpecName "kube-api-access-d4lht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.563559 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c847ba2-4e65-4677-b8b6-514162b0c1bc-kube-api-access-2bnq5" (OuterVolumeSpecName: "kube-api-access-2bnq5") pod "9c847ba2-4e65-4677-b8b6-514162b0c1bc" (UID: "9c847ba2-4e65-4677-b8b6-514162b0c1bc"). InnerVolumeSpecName "kube-api-access-2bnq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.571839 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbe639d2-1844-47b8-b4c8-3b602547070a-kube-api-access-26phd" (OuterVolumeSpecName: "kube-api-access-26phd") pod "fbe639d2-1844-47b8-b4c8-3b602547070a" (UID: "fbe639d2-1844-47b8-b4c8-3b602547070a"). InnerVolumeSpecName "kube-api-access-26phd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.659608 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bnq5\" (UniqueName: \"kubernetes.io/projected/9c847ba2-4e65-4677-b8b6-514162b0c1bc-kube-api-access-2bnq5\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.659824 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lht\" (UniqueName: \"kubernetes.io/projected/5963807a-fc48-485b-a3a5-7b07791dfdd0-kube-api-access-d4lht\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.659910 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26phd\" (UniqueName: \"kubernetes.io/projected/fbe639d2-1844-47b8-b4c8-3b602547070a-kube-api-access-26phd\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.987862 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-974x9" event={"ID":"fbe639d2-1844-47b8-b4c8-3b602547070a","Type":"ContainerDied","Data":"7f37d779c700826fe60bdfa1a4ef8f061e8d1556831ebeec2ca1fc724c5892c6"} Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.988249 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f37d779c700826fe60bdfa1a4ef8f061e8d1556831ebeec2ca1fc724c5892c6" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.988315 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.996566 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8csjv" event={"ID":"5963807a-fc48-485b-a3a5-7b07791dfdd0","Type":"ContainerDied","Data":"07fcf8f0edbd4f84bc91164ded641268ec9af1fe660812bf6c9ef74d84b50a42"} Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.996605 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07fcf8f0edbd4f84bc91164ded641268ec9af1fe660812bf6c9ef74d84b50a42" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.996609 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.999364 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.000160 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3bb8-account-create-update-k967z" event={"ID":"9c847ba2-4e65-4677-b8b6-514162b0c1bc","Type":"ContainerDied","Data":"cfa1f35b10366d8ca884c346ca0693b8589dcc18b17aa9639bcf675816b48969"} Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.000191 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfa1f35b10366d8ca884c346ca0693b8589dcc18b17aa9639bcf675816b48969" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.497984 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.510766 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.517981 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.678596 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-operator-scripts\") pod \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\" (UID: \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\") " Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.678706 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7hr9\" (UniqueName: \"kubernetes.io/projected/e4f58498-29bd-47d8-8af1-ac98b4a9f510-kube-api-access-v7hr9\") pod \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\" (UID: \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\") " Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.678782 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6kv4\" (UniqueName: \"kubernetes.io/projected/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-kube-api-access-f6kv4\") pod \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\" (UID: \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\") " Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.678825 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-operator-scripts\") pod \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\" (UID: \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\") " Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.678865 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlrcs\" (UniqueName: \"kubernetes.io/projected/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-kube-api-access-jlrcs\") pod \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\" (UID: \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\") " Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.678882 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4f58498-29bd-47d8-8af1-ac98b4a9f510-operator-scripts\") pod \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\" (UID: \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\") " Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.679252 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58" (UID: "bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.679680 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172" (UID: "ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.679940 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.679964 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.680495 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4f58498-29bd-47d8-8af1-ac98b4a9f510-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4f58498-29bd-47d8-8af1-ac98b4a9f510" (UID: "e4f58498-29bd-47d8-8af1-ac98b4a9f510"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.684139 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-kube-api-access-f6kv4" (OuterVolumeSpecName: "kube-api-access-f6kv4") pod "ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172" (UID: "ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172"). InnerVolumeSpecName "kube-api-access-f6kv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.691900 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-kube-api-access-jlrcs" (OuterVolumeSpecName: "kube-api-access-jlrcs") pod "bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58" (UID: "bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58"). InnerVolumeSpecName "kube-api-access-jlrcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.692354 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4f58498-29bd-47d8-8af1-ac98b4a9f510-kube-api-access-v7hr9" (OuterVolumeSpecName: "kube-api-access-v7hr9") pod "e4f58498-29bd-47d8-8af1-ac98b4a9f510" (UID: "e4f58498-29bd-47d8-8af1-ac98b4a9f510"). InnerVolumeSpecName "kube-api-access-v7hr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.781945 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlrcs\" (UniqueName: \"kubernetes.io/projected/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-kube-api-access-jlrcs\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.782318 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4f58498-29bd-47d8-8af1-ac98b4a9f510-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.782464 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7hr9\" (UniqueName: \"kubernetes.io/projected/e4f58498-29bd-47d8-8af1-ac98b4a9f510-kube-api-access-v7hr9\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.782607 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6kv4\" (UniqueName: \"kubernetes.io/projected/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-kube-api-access-f6kv4\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.012512 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.012537 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6d31-account-create-update-v52m2" event={"ID":"e4f58498-29bd-47d8-8af1-ac98b4a9f510","Type":"ContainerDied","Data":"fb4a64e2025c12ae8adaca5cf9a94c80a4cbadedb759b77ea3de8530b331e28a"} Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.013651 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb4a64e2025c12ae8adaca5cf9a94c80a4cbadedb759b77ea3de8530b331e28a" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.015709 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9kql9" event={"ID":"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172","Type":"ContainerDied","Data":"99ec12ef319b65df6402a210f267dd882457bc1525334bf5d8dcc815a06bbc60"} Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.015768 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99ec12ef319b65df6402a210f267dd882457bc1525334bf5d8dcc815a06bbc60" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.015889 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.018391 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" event={"ID":"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58","Type":"ContainerDied","Data":"98651171c7b939f6328de68d0a540446143d9ad43bf62668613678d3ae0d8135"} Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.018433 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98651171c7b939f6328de68d0a540446143d9ad43bf62668613678d3ae0d8135" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.018520 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.964258 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zr8nj"] Jan 21 16:11:01 crc kubenswrapper[4902]: E0121 16:11:01.964735 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5963807a-fc48-485b-a3a5-7b07791dfdd0" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.964755 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5963807a-fc48-485b-a3a5-7b07791dfdd0" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: E0121 16:11:01.964784 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4f58498-29bd-47d8-8af1-ac98b4a9f510" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.964793 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f58498-29bd-47d8-8af1-ac98b4a9f510" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: E0121 16:11:01.964805 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.964814 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: E0121 16:11:01.964835 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c847ba2-4e65-4677-b8b6-514162b0c1bc" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.964844 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c847ba2-4e65-4677-b8b6-514162b0c1bc" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: E0121 16:11:01.964863 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.964872 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: E0121 16:11:01.964889 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe639d2-1844-47b8-b4c8-3b602547070a" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.964898 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe639d2-1844-47b8-b4c8-3b602547070a" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.965125 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c847ba2-4e65-4677-b8b6-514162b0c1bc" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.965148 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbe639d2-1844-47b8-b4c8-3b602547070a" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.965169 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.965192 4902 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e4f58498-29bd-47d8-8af1-ac98b4a9f510" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.965205 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.965219 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5963807a-fc48-485b-a3a5-7b07791dfdd0" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.965945 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.968170 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-frtf7" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.968228 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.968373 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.974791 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zr8nj"] Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.105935 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-scripts\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.106009 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.106194 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-config-data\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.106222 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm9wf\" (UniqueName: \"kubernetes.io/projected/76e6442c-e6fd-498e-b20d-e994574644ea-kube-api-access-gm9wf\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.207881 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-config-data\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.207931 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gm9wf\" (UniqueName: \"kubernetes.io/projected/76e6442c-e6fd-498e-b20d-e994574644ea-kube-api-access-gm9wf\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.208060 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-scripts\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.208103 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.212780 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-scripts\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.212885 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-config-data\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.213771 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.238180 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm9wf\" (UniqueName: \"kubernetes.io/projected/76e6442c-e6fd-498e-b20d-e994574644ea-kube-api-access-gm9wf\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.282339 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.757460 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zr8nj"] Jan 21 16:11:03 crc kubenswrapper[4902]: I0121 16:11:03.034291 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" event={"ID":"76e6442c-e6fd-498e-b20d-e994574644ea","Type":"ContainerStarted","Data":"a6ae0e388b560d80c76d474d9e559dfd6b82ed31121bdddca1d8b03c2f3ee0f8"} Jan 21 16:11:04 crc kubenswrapper[4902]: I0121 16:11:04.043072 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" event={"ID":"76e6442c-e6fd-498e-b20d-e994574644ea","Type":"ContainerStarted","Data":"432f7ea37f3132bc52dfdced9ef97fb63c40a136694ea136586f2dee4c4a42b9"} Jan 21 16:11:04 crc kubenswrapper[4902]: I0121 16:11:04.065420 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" podStartSLOduration=3.065398427 podStartE2EDuration="3.065398427s" podCreationTimestamp="2026-01-21 16:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:04.060196091 +0000 UTC m=+5826.137029140" watchObservedRunningTime="2026-01-21 16:11:04.065398427 +0000 UTC m=+5826.142231456" Jan 21 16:11:09 crc kubenswrapper[4902]: I0121 16:11:09.092372 4902 generic.go:334] "Generic (PLEG): container finished" podID="76e6442c-e6fd-498e-b20d-e994574644ea" containerID="432f7ea37f3132bc52dfdced9ef97fb63c40a136694ea136586f2dee4c4a42b9" exitCode=0 Jan 21 16:11:09 crc kubenswrapper[4902]: I0121 16:11:09.092472 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" event={"ID":"76e6442c-e6fd-498e-b20d-e994574644ea","Type":"ContainerDied","Data":"432f7ea37f3132bc52dfdced9ef97fb63c40a136694ea136586f2dee4c4a42b9"} Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.468488 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.576834 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-config-data\") pod \"76e6442c-e6fd-498e-b20d-e994574644ea\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.577069 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm9wf\" (UniqueName: \"kubernetes.io/projected/76e6442c-e6fd-498e-b20d-e994574644ea-kube-api-access-gm9wf\") pod \"76e6442c-e6fd-498e-b20d-e994574644ea\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.578312 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-combined-ca-bundle\") pod \"76e6442c-e6fd-498e-b20d-e994574644ea\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.578864 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-scripts\") pod \"76e6442c-e6fd-498e-b20d-e994574644ea\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.583134 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e6442c-e6fd-498e-b20d-e994574644ea-kube-api-access-gm9wf" (OuterVolumeSpecName: "kube-api-access-gm9wf") pod "76e6442c-e6fd-498e-b20d-e994574644ea" (UID: "76e6442c-e6fd-498e-b20d-e994574644ea"). InnerVolumeSpecName "kube-api-access-gm9wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.585617 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-scripts" (OuterVolumeSpecName: "scripts") pod "76e6442c-e6fd-498e-b20d-e994574644ea" (UID: "76e6442c-e6fd-498e-b20d-e994574644ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.604306 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76e6442c-e6fd-498e-b20d-e994574644ea" (UID: "76e6442c-e6fd-498e-b20d-e994574644ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.612091 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-config-data" (OuterVolumeSpecName: "config-data") pod "76e6442c-e6fd-498e-b20d-e994574644ea" (UID: "76e6442c-e6fd-498e-b20d-e994574644ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.682635 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.682666 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm9wf\" (UniqueName: \"kubernetes.io/projected/76e6442c-e6fd-498e-b20d-e994574644ea-kube-api-access-gm9wf\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.682679 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.682687 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.115877 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" event={"ID":"76e6442c-e6fd-498e-b20d-e994574644ea","Type":"ContainerDied","Data":"a6ae0e388b560d80c76d474d9e559dfd6b82ed31121bdddca1d8b03c2f3ee0f8"} Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.115936 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.115943 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6ae0e388b560d80c76d474d9e559dfd6b82ed31121bdddca1d8b03c2f3ee0f8" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.216219 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 16:11:11 crc kubenswrapper[4902]: E0121 16:11:11.216612 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e6442c-e6fd-498e-b20d-e994574644ea" containerName="nova-cell0-conductor-db-sync" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.216631 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e6442c-e6fd-498e-b20d-e994574644ea" containerName="nova-cell0-conductor-db-sync" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.216818 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e6442c-e6fd-498e-b20d-e994574644ea" containerName="nova-cell0-conductor-db-sync" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.217446 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.222571 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.222835 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-frtf7" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.226431 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.295660 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fa221c-a236-471b-a3ca-0efc339d0fcc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.295740 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61fa221c-a236-471b-a3ca-0efc339d0fcc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.295886 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shn78\" (UniqueName: \"kubernetes.io/projected/61fa221c-a236-471b-a3ca-0efc339d0fcc-kube-api-access-shn78\") pod \"nova-cell0-conductor-0\" (UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.397463 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fa221c-a236-471b-a3ca-0efc339d0fcc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.397891 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61fa221c-a236-471b-a3ca-0efc339d0fcc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.398449 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shn78\" (UniqueName: \"kubernetes.io/projected/61fa221c-a236-471b-a3ca-0efc339d0fcc-kube-api-access-shn78\") pod \"nova-cell0-conductor-0\" (UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.401292 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fa221c-a236-471b-a3ca-0efc339d0fcc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.412668 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61fa221c-a236-471b-a3ca-0efc339d0fcc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.415250 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shn78\" (UniqueName: \"kubernetes.io/projected/61fa221c-a236-471b-a3ca-0efc339d0fcc-kube-api-access-shn78\") pod \"nova-cell0-conductor-0\" (UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.536905 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.967519 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 16:11:11 crc kubenswrapper[4902]: W0121 16:11:11.967989 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61fa221c_a236_471b_a3ca_0efc339d0fcc.slice/crio-04bd2cb2d11ee246f2fb6729138c92051305552a6c5ec2e1178872bba7d94017 WatchSource:0}: Error finding container 04bd2cb2d11ee246f2fb6729138c92051305552a6c5ec2e1178872bba7d94017: Status 404 returned error can't find the container with id 04bd2cb2d11ee246f2fb6729138c92051305552a6c5ec2e1178872bba7d94017 Jan 21 16:11:12 crc kubenswrapper[4902]: I0121 16:11:12.126033 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"61fa221c-a236-471b-a3ca-0efc339d0fcc","Type":"ContainerStarted","Data":"04bd2cb2d11ee246f2fb6729138c92051305552a6c5ec2e1178872bba7d94017"} Jan 21 16:11:13 crc kubenswrapper[4902]: I0121 16:11:13.138036 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"61fa221c-a236-471b-a3ca-0efc339d0fcc","Type":"ContainerStarted","Data":"10ad884c092a8180f4ad83a6625db58b7cfd28f41342f72b2c36f4abf6c61ace"} Jan 21 16:11:13 crc kubenswrapper[4902]: I0121 16:11:13.138391 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:13 crc kubenswrapper[4902]: I0121 16:11:13.170447 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.170424265 podStartE2EDuration="2.170424265s" podCreationTimestamp="2026-01-21 16:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:13.158228632 +0000 UTC m=+5835.235061691" watchObservedRunningTime="2026-01-21 16:11:13.170424265 +0000 UTC m=+5835.247257304" Jan 21 16:11:21 crc kubenswrapper[4902]: I0121 16:11:21.572225 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.005381 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7ld7m"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.006686 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.009557 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.009580 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.021687 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7ld7m"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.129080 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-scripts\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.129167 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbjh\" (UniqueName: \"kubernetes.io/projected/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-kube-api-access-sbbjh\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.129260 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-config-data\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.129298 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.178658 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.180268 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.183506 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.190573 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.192076 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.197593 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.204646 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.217665 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.231625 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-config-data\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.231700 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.231819 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-scripts\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.231873 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbbjh\" (UniqueName: \"kubernetes.io/projected/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-kube-api-access-sbbjh\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.241930 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-scripts\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.245406 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.251740 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-config-data\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.258062 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.259859 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.264698 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.276017 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbbjh\" (UniqueName: \"kubernetes.io/projected/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-kube-api-access-sbbjh\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.289253 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333420 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333479 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333508 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pq4g\" (UniqueName: \"kubernetes.io/projected/3a6e0e21-ab4e-40db-ad7d-fde50926c691-kube-api-access-4pq4g\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333541 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a6e0e21-ab4e-40db-ad7d-fde50926c691-logs\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333565 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333617 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m2j8\" (UniqueName: \"kubernetes.io/projected/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-kube-api-access-5m2j8\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333654 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdcvq\" (UniqueName: \"kubernetes.io/projected/57ebee9b-653a-4d49-9002-23c81b622b7c-kube-api-access-vdcvq\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333686 4902 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-config-data\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333715 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333980 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-config-data\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.335465 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.384596 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.386075 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.394684 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.418293 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435606 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435668 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435696 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pq4g\" (UniqueName: \"kubernetes.io/projected/3a6e0e21-ab4e-40db-ad7d-fde50926c691-kube-api-access-4pq4g\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435734 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a6e0e21-ab4e-40db-ad7d-fde50926c691-logs\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435759 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435811 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m2j8\" (UniqueName: \"kubernetes.io/projected/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-kube-api-access-5m2j8\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435853 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdcvq\" (UniqueName: \"kubernetes.io/projected/57ebee9b-653a-4d49-9002-23c81b622b7c-kube-api-access-vdcvq\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435900 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-config-data\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435932 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.436123 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-config-data\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.436733 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a6e0e21-ab4e-40db-ad7d-fde50926c691-logs\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.444097 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.446977 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.455514 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-config-data\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.456527 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.462784 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.469250 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-config-data\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.473802 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pq4g\" (UniqueName: \"kubernetes.io/projected/3a6e0e21-ab4e-40db-ad7d-fde50926c691-kube-api-access-4pq4g\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.475455 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m2j8\" (UniqueName: \"kubernetes.io/projected/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-kube-api-access-5m2j8\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.476325 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdcvq\" (UniqueName: \"kubernetes.io/projected/57ebee9b-653a-4d49-9002-23c81b622b7c-kube-api-access-vdcvq\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.486803 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66f49c7d99-gbqjj"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.488478 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.504548 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.515529 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.537214 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66f49c7d99-gbqjj"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538388 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538468 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb79m\" (UniqueName: \"kubernetes.io/projected/f31a37d5-535d-42a2-85bd-29497224ebb2-kube-api-access-vb79m\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538576 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-nb\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538616 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-dns-svc\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538656 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-sb\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538730 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-config-data\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538756 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31a37d5-535d-42a2-85bd-29497224ebb2-logs\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538930 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-config\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538982 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwjh7\" (UniqueName: 
\"kubernetes.io/projected/b01a7675-d9b2-451e-8137-b069f892c1dd-kube-api-access-nwjh7\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641176 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-config-data\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641233 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31a37d5-535d-42a2-85bd-29497224ebb2-logs\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641286 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-config\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641325 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwjh7\" (UniqueName: \"kubernetes.io/projected/b01a7675-d9b2-451e-8137-b069f892c1dd-kube-api-access-nwjh7\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641363 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641420 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb79m\" (UniqueName: \"kubernetes.io/projected/f31a37d5-535d-42a2-85bd-29497224ebb2-kube-api-access-vb79m\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641475 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-nb\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641513 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-dns-svc\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641574 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-sb\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " 
pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.642889 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-sb\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.644392 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-config\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.644581 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31a37d5-535d-42a2-85bd-29497224ebb2-logs\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.645151 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-nb\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.645419 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-dns-svc\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.653783 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.653915 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-config-data\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.662486 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb79m\" (UniqueName: \"kubernetes.io/projected/f31a37d5-535d-42a2-85bd-29497224ebb2-kube-api-access-vb79m\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.667321 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwjh7\" (UniqueName: \"kubernetes.io/projected/b01a7675-d9b2-451e-8137-b069f892c1dd-kube-api-access-nwjh7\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.694400 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.864315 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.881176 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.941363 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7ld7m"] Jan 21 16:11:22 crc kubenswrapper[4902]: W0121 16:11:22.952303 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbeebb97d_c56a_4c7d_8ec0_f9982f9c2e32.slice/crio-aff4329b968547b9f0c29b41c0b1ec28e6bda5a703c8e7a44c04923bf8513724 WatchSource:0}: Error finding container aff4329b968547b9f0c29b41c0b1ec28e6bda5a703c8e7a44c04923bf8513724: Status 404 returned error can't find the container with id aff4329b968547b9f0c29b41c0b1ec28e6bda5a703c8e7a44c04923bf8513724 Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.106014 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.123760 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.204452 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mqjfk"] Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.212444 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.221263 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.221644 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.237283 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mqjfk"] Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.250178 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.254994 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.255130 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj5cn\" (UniqueName: \"kubernetes.io/projected/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-kube-api-access-vj5cn\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.255169 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-scripts\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.255249 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-config-data\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.297198 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7ld7m" event={"ID":"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32","Type":"ContainerStarted","Data":"7e054620420f286eb319ea74bdca60ca0a6e43b9d52a5c4ad7043b88a7a02929"} Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.297424 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7ld7m" event={"ID":"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32","Type":"ContainerStarted","Data":"aff4329b968547b9f0c29b41c0b1ec28e6bda5a703c8e7a44c04923bf8513724"} Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.304388 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"57ebee9b-653a-4d49-9002-23c81b622b7c","Type":"ContainerStarted","Data":"9ebe5c00f1a81b515c7ecc716c300b5811813ab974f7f4bd90b9fc00489cfc97"} Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.306085 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"11b80ea3-f5a8-48c8-ba60-d26265f71a6b","Type":"ContainerStarted","Data":"0ac3900103983dac93de1f7fd1bd8a8ee9bd704671df4483ae644f05d4a22117"} Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.319119 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7ld7m" podStartSLOduration=2.319098496 podStartE2EDuration="2.319098496s" podCreationTimestamp="2026-01-21 16:11:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:23.309853206 +0000 UTC m=+5845.386686235" watchObservedRunningTime="2026-01-21 16:11:23.319098496 +0000 UTC m=+5845.395931525" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.356538 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj5cn\" (UniqueName: \"kubernetes.io/projected/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-kube-api-access-vj5cn\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.356797 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-scripts\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.357030 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-config-data\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: 
\"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.357213 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.362280 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-scripts\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.365559 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-config-data\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.366254 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.379049 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj5cn\" (UniqueName: \"kubernetes.io/projected/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-kube-api-access-vj5cn\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.418783 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66f49c7d99-gbqjj"] Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.497863 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.526786 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:24 crc kubenswrapper[4902]: W0121 16:11:24.120646 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fafbdf5_1100_4f6f_831e_c7dd0fc63586.slice/crio-a74358b1c5a77d77722a14ca051562ae9ede85ed8984c1a6ea4d025963ae5d19 WatchSource:0}: Error finding container a74358b1c5a77d77722a14ca051562ae9ede85ed8984c1a6ea4d025963ae5d19: Status 404 returned error can't find the container with id a74358b1c5a77d77722a14ca051562ae9ede85ed8984c1a6ea4d025963ae5d19 Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.124880 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mqjfk"] Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.320366 4902 generic.go:334] "Generic (PLEG): container finished" podID="b01a7675-d9b2-451e-8137-b069f892c1dd" containerID="18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4" exitCode=0 Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.320454 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" event={"ID":"b01a7675-d9b2-451e-8137-b069f892c1dd","Type":"ContainerDied","Data":"18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.320481 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" event={"ID":"b01a7675-d9b2-451e-8137-b069f892c1dd","Type":"ContainerStarted","Data":"3e67c0e6a35688e67c9ae97a666562e7a06fee4e64265deb351ad6f1c7a1f81e"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.367366 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6e0e21-ab4e-40db-ad7d-fde50926c691","Type":"ContainerStarted","Data":"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.367423 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6e0e21-ab4e-40db-ad7d-fde50926c691","Type":"ContainerStarted","Data":"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.367435 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6e0e21-ab4e-40db-ad7d-fde50926c691","Type":"ContainerStarted","Data":"4b639bfa91be42f81663a77a5e76c1832f5e50df04be72677151e02c7b0de405"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.376926 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" event={"ID":"6fafbdf5-1100-4f6f-831e-c7dd0fc63586","Type":"ContainerStarted","Data":"a74358b1c5a77d77722a14ca051562ae9ede85ed8984c1a6ea4d025963ae5d19"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.380480 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"57ebee9b-653a-4d49-9002-23c81b622b7c","Type":"ContainerStarted","Data":"31fae740e9f177c6066f6345aa9a2713697f906cf0f7e1329e3a321359e5144b"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.395925 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.395905853 podStartE2EDuration="2.395905853s" podCreationTimestamp="2026-01-21 16:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:24.385996155 +0000 UTC m=+5846.462829184" watchObservedRunningTime="2026-01-21 16:11:24.395905853 +0000 UTC m=+5846.472738882" Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.403631 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"11b80ea3-f5a8-48c8-ba60-d26265f71a6b","Type":"ContainerStarted","Data":"45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.409361 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.409345881 podStartE2EDuration="2.409345881s" podCreationTimestamp="2026-01-21 16:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:24.407977772 +0000 UTC m=+5846.484810801" watchObservedRunningTime="2026-01-21 16:11:24.409345881 +0000 UTC m=+5846.486178910" Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.411329 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f31a37d5-535d-42a2-85bd-29497224ebb2","Type":"ContainerStarted","Data":"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.411374 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f31a37d5-535d-42a2-85bd-29497224ebb2","Type":"ContainerStarted","Data":"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.411385 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f31a37d5-535d-42a2-85bd-29497224ebb2","Type":"ContainerStarted","Data":"70ca050dfe064f0adfae2ee5b6a0be6a9d2b4a8c56771dced41717611bd3cc98"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.440628 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.44060604 podStartE2EDuration="2.44060604s" podCreationTimestamp="2026-01-21 16:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:24.431984497 +0000 UTC m=+5846.508817546" watchObservedRunningTime="2026-01-21 16:11:24.44060604 +0000 UTC m=+5846.517439069" Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.458428 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.45840441 podStartE2EDuration="2.45840441s" podCreationTimestamp="2026-01-21 16:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:24.446746482 +0000 UTC m=+5846.523579511" watchObservedRunningTime="2026-01-21 16:11:24.45840441 +0000 UTC m=+5846.535237439" Jan 21 16:11:25 crc kubenswrapper[4902]: I0121 16:11:25.433869 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" event={"ID":"b01a7675-d9b2-451e-8137-b069f892c1dd","Type":"ContainerStarted","Data":"c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2"} Jan 21 16:11:25 crc kubenswrapper[4902]: I0121 16:11:25.435330 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:25 crc kubenswrapper[4902]: I0121 16:11:25.445733 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" event={"ID":"6fafbdf5-1100-4f6f-831e-c7dd0fc63586","Type":"ContainerStarted","Data":"889fe026bf2a7b74189409dad70c2684f40ab43f381e9a39094266539161c3b9"} Jan 21 16:11:25 crc kubenswrapper[4902]: I0121 16:11:25.465910 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" podStartSLOduration=3.465888967 podStartE2EDuration="3.465888967s" podCreationTimestamp="2026-01-21 16:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:25.455428133 +0000 UTC m=+5847.532261172" watchObservedRunningTime="2026-01-21 16:11:25.465888967 +0000 UTC m=+5847.542721996" Jan 21 16:11:25 crc kubenswrapper[4902]: I0121 16:11:25.472527 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" podStartSLOduration=2.472492012 podStartE2EDuration="2.472492012s" podCreationTimestamp="2026-01-21 16:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:25.470915158 +0000 UTC m=+5847.547748187" watchObservedRunningTime="2026-01-21 16:11:25.472492012 +0000 UTC m=+5847.549325041" Jan 21 16:11:26 crc kubenswrapper[4902]: I0121 16:11:26.388137 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:26 crc kubenswrapper[4902]: I0121 16:11:26.451474 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerName="nova-metadata-log" containerID="cri-o://40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45" gracePeriod=30 Jan 21 16:11:26 crc kubenswrapper[4902]: I0121 16:11:26.451617 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerName="nova-metadata-metadata" containerID="cri-o://537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55" gracePeriod=30 Jan 21 16:11:26 crc kubenswrapper[4902]: I0121 16:11:26.459096 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:26 crc kubenswrapper[4902]: I0121 16:11:26.459306 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="11b80ea3-f5a8-48c8-ba60-d26265f71a6b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a" gracePeriod=30 Jan 21 16:11:26 crc kubenswrapper[4902]: E0121 16:11:26.774148 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf31a37d5_535d_42a2_85bd_29497224ebb2.slice/crio-537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.283259 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.372776 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m2j8\" (UniqueName: \"kubernetes.io/projected/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-kube-api-access-5m2j8\") pod \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.372852 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-config-data\") pod \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.372882 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-combined-ca-bundle\") pod \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.383710 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-kube-api-access-5m2j8" (OuterVolumeSpecName: "kube-api-access-5m2j8") pod "11b80ea3-f5a8-48c8-ba60-d26265f71a6b" (UID: "11b80ea3-f5a8-48c8-ba60-d26265f71a6b"). InnerVolumeSpecName "kube-api-access-5m2j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.413228 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11b80ea3-f5a8-48c8-ba60-d26265f71a6b" (UID: "11b80ea3-f5a8-48c8-ba60-d26265f71a6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.428459 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-config-data" (OuterVolumeSpecName: "config-data") pod "11b80ea3-f5a8-48c8-ba60-d26265f71a6b" (UID: "11b80ea3-f5a8-48c8-ba60-d26265f71a6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.457443 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.462198 4902 generic.go:334] "Generic (PLEG): container finished" podID="11b80ea3-f5a8-48c8-ba60-d26265f71a6b" containerID="45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a" exitCode=0 Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.462298 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"11b80ea3-f5a8-48c8-ba60-d26265f71a6b","Type":"ContainerDied","Data":"45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a"} Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.462294 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.462342 4902 scope.go:117] "RemoveContainer" containerID="45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.462331 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"11b80ea3-f5a8-48c8-ba60-d26265f71a6b","Type":"ContainerDied","Data":"0ac3900103983dac93de1f7fd1bd8a8ee9bd704671df4483ae644f05d4a22117"} Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.464014 4902 generic.go:334] "Generic (PLEG): container finished" podID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerID="537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55" exitCode=0 Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.464033 4902 generic.go:334] "Generic (PLEG): container finished" podID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerID="40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45" exitCode=143 Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.464062 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f31a37d5-535d-42a2-85bd-29497224ebb2","Type":"ContainerDied","Data":"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55"} Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.464093 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f31a37d5-535d-42a2-85bd-29497224ebb2","Type":"ContainerDied","Data":"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45"} Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.464104 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f31a37d5-535d-42a2-85bd-29497224ebb2","Type":"ContainerDied","Data":"70ca050dfe064f0adfae2ee5b6a0be6a9d2b4a8c56771dced41717611bd3cc98"} Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.464111 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.475287 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb79m\" (UniqueName: \"kubernetes.io/projected/f31a37d5-535d-42a2-85bd-29497224ebb2-kube-api-access-vb79m\") pod \"f31a37d5-535d-42a2-85bd-29497224ebb2\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.475555 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31a37d5-535d-42a2-85bd-29497224ebb2-logs\") pod \"f31a37d5-535d-42a2-85bd-29497224ebb2\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.475633 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-config-data\") pod \"f31a37d5-535d-42a2-85bd-29497224ebb2\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.476084 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f31a37d5-535d-42a2-85bd-29497224ebb2-logs" (OuterVolumeSpecName: "logs") pod "f31a37d5-535d-42a2-85bd-29497224ebb2" (UID: "f31a37d5-535d-42a2-85bd-29497224ebb2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.476709 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-combined-ca-bundle\") pod \"f31a37d5-535d-42a2-85bd-29497224ebb2\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.478296 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31a37d5-535d-42a2-85bd-29497224ebb2-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.478330 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m2j8\" (UniqueName: \"kubernetes.io/projected/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-kube-api-access-5m2j8\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.478346 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.478359 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.479801 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31a37d5-535d-42a2-85bd-29497224ebb2-kube-api-access-vb79m" (OuterVolumeSpecName: "kube-api-access-vb79m") pod "f31a37d5-535d-42a2-85bd-29497224ebb2" (UID: "f31a37d5-535d-42a2-85bd-29497224ebb2"). InnerVolumeSpecName "kube-api-access-vb79m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.502288 4902 scope.go:117] "RemoveContainer" containerID="45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a" Jan 21 16:11:27 crc kubenswrapper[4902]: E0121 16:11:27.502736 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a\": container with ID starting with 45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a not found: ID does not exist" containerID="45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.502770 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a"} err="failed to get container status \"45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a\": rpc error: code = NotFound desc = could not find container \"45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a\": container with ID starting with 45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a not found: ID does not exist" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.502791 4902 scope.go:117] "RemoveContainer" containerID="537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.505387 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.512276 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f31a37d5-535d-42a2-85bd-29497224ebb2" (UID: "f31a37d5-535d-42a2-85bd-29497224ebb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.516610 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.525289 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.532415 4902 scope.go:117] "RemoveContainer" containerID="40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.541096 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-config-data" (OuterVolumeSpecName: "config-data") pod "f31a37d5-535d-42a2-85bd-29497224ebb2" (UID: "f31a37d5-535d-42a2-85bd-29497224ebb2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.542673 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:27 crc kubenswrapper[4902]: E0121 16:11:27.543035 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerName="nova-metadata-metadata" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.543065 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerName="nova-metadata-metadata" Jan 21 16:11:27 crc kubenswrapper[4902]: E0121 16:11:27.543094 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerName="nova-metadata-log" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.543103 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerName="nova-metadata-log" Jan 21 16:11:27 crc kubenswrapper[4902]: E0121 16:11:27.543120 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b80ea3-f5a8-48c8-ba60-d26265f71a6b" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.543126 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b80ea3-f5a8-48c8-ba60-d26265f71a6b" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.543300 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerName="nova-metadata-metadata" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.543327 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerName="nova-metadata-log" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.543336 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b80ea3-f5a8-48c8-ba60-d26265f71a6b" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.544032 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.550821 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.551034 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.551184 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.565998 4902 scope.go:117] "RemoveContainer" containerID="537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55" Jan 21 16:11:27 crc kubenswrapper[4902]: E0121 16:11:27.569977 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55\": container with ID starting with 537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55 not found: ID does not exist" containerID="537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.570016 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55"} err="failed to get container status \"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55\": rpc error: code = NotFound desc = could not find container \"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55\": container with ID starting with 537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55 not found: ID does not exist" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.570056 4902 scope.go:117] "RemoveContainer" containerID="40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.573304 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:27 crc kubenswrapper[4902]: E0121 16:11:27.575197 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45\": container with ID starting with 40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45 not found: ID does not exist" containerID="40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.575242 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45"} err="failed to get container status \"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45\": rpc error: code = NotFound desc = could not find container \"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45\": container with ID starting with 40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45 not found: ID does not exist" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.575272 4902 scope.go:117] "RemoveContainer" containerID="537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.579174 4902 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55"} err="failed to get container status \"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55\": rpc error: code = NotFound desc = could not find container \"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55\": container with ID starting with 537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55 not found: ID does not exist" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.579212 4902 scope.go:117] "RemoveContainer" containerID="40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.585308 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.585464 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lrnt\" (UniqueName: \"kubernetes.io/projected/78825018-5d0a-4fe7-83c7-ef79700642cd-kube-api-access-2lrnt\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.585523 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.585595 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.585646 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.585773 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb79m\" (UniqueName: \"kubernetes.io/projected/f31a37d5-535d-42a2-85bd-29497224ebb2-kube-api-access-vb79m\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.585792 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.585806 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.586189 4902 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45"} err="failed to get container status \"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45\": rpc error: code = NotFound desc = could not find container \"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45\": container with ID starting with 40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45 not found: ID does not exist" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.688254 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lrnt\" (UniqueName: \"kubernetes.io/projected/78825018-5d0a-4fe7-83c7-ef79700642cd-kube-api-access-2lrnt\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.688338 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.688413 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.688448 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.688521 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.696249 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.700490 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.700606 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.712703 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lrnt\" (UniqueName: \"kubernetes.io/projected/78825018-5d0a-4fe7-83c7-ef79700642cd-kube-api-access-2lrnt\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.713396 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.830612 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.843552 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.854189 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.855654 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.862517 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.862683 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.869516 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.892459 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.892554 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06cee7ae-d3df-4c78-8056-2877e835409a-logs\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.892583 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.892606 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpjw6\" (UniqueName: \"kubernetes.io/projected/06cee7ae-d3df-4c78-8056-2877e835409a-kube-api-access-cpjw6\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.892643 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-config-data\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.933352 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.994271 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.994552 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06cee7ae-d3df-4c78-8056-2877e835409a-logs\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.994578 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.994605 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpjw6\" (UniqueName: \"kubernetes.io/projected/06cee7ae-d3df-4c78-8056-2877e835409a-kube-api-access-cpjw6\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.994645 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-config-data\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.997550 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06cee7ae-d3df-4c78-8056-2877e835409a-logs\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.001611 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.001611 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-config-data\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.002711 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.015705 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpjw6\" (UniqueName: \"kubernetes.io/projected/06cee7ae-d3df-4c78-8056-2877e835409a-kube-api-access-cpjw6\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.184378 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.312418 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11b80ea3-f5a8-48c8-ba60-d26265f71a6b" path="/var/lib/kubelet/pods/11b80ea3-f5a8-48c8-ba60-d26265f71a6b/volumes" Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.313100 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" path="/var/lib/kubelet/pods/f31a37d5-535d-42a2-85bd-29497224ebb2/volumes" Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.463175 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:28 crc kubenswrapper[4902]: W0121 16:11:28.468876 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78825018_5d0a_4fe7_83c7_ef79700642cd.slice/crio-14c732739398993b1ed2c099355743d5d42d0e1130b24e86f9bbbb3a83a28a2c WatchSource:0}: Error finding container 14c732739398993b1ed2c099355743d5d42d0e1130b24e86f9bbbb3a83a28a2c: Status 404 returned error can't find the container with id 14c732739398993b1ed2c099355743d5d42d0e1130b24e86f9bbbb3a83a28a2c Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.478103 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" event={"ID":"6fafbdf5-1100-4f6f-831e-c7dd0fc63586","Type":"ContainerDied","Data":"889fe026bf2a7b74189409dad70c2684f40ab43f381e9a39094266539161c3b9"} Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.478039 4902 generic.go:334] "Generic (PLEG): container finished" podID="6fafbdf5-1100-4f6f-831e-c7dd0fc63586" containerID="889fe026bf2a7b74189409dad70c2684f40ab43f381e9a39094266539161c3b9" exitCode=0 Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.620753 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.490813 4902 generic.go:334] "Generic (PLEG): container finished" podID="beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" containerID="7e054620420f286eb319ea74bdca60ca0a6e43b9d52a5c4ad7043b88a7a02929" exitCode=0 Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.491357 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7ld7m" event={"ID":"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32","Type":"ContainerDied","Data":"7e054620420f286eb319ea74bdca60ca0a6e43b9d52a5c4ad7043b88a7a02929"} Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.493478 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06cee7ae-d3df-4c78-8056-2877e835409a","Type":"ContainerStarted","Data":"6db095d59697014f5d540bbdbf584a4f12b528507f890ae6dcc568fbd9d40309"} Jan 21 16:11:29 crc 
kubenswrapper[4902]: I0121 16:11:29.493549 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06cee7ae-d3df-4c78-8056-2877e835409a","Type":"ContainerStarted","Data":"badd389d3e26fbdcc8666e8bf066881f6cf09b62a13c95591c64f06bb805654a"} Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.493566 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06cee7ae-d3df-4c78-8056-2877e835409a","Type":"ContainerStarted","Data":"c7e7aaf7977a7ad1cca77abff20f575bad05c5750d7ec60c2c6c5384633a215a"} Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.495578 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78825018-5d0a-4fe7-83c7-ef79700642cd","Type":"ContainerStarted","Data":"d3fba8ddc406502ea544213a2eb84d3faf6ab2405d901c458926b932c0b86ae7"} Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.495613 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78825018-5d0a-4fe7-83c7-ef79700642cd","Type":"ContainerStarted","Data":"14c732739398993b1ed2c099355743d5d42d0e1130b24e86f9bbbb3a83a28a2c"} Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.557637 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.557613984 podStartE2EDuration="2.557613984s" podCreationTimestamp="2026-01-21 16:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:29.538442585 +0000 UTC m=+5851.615275624" watchObservedRunningTime="2026-01-21 16:11:29.557613984 +0000 UTC m=+5851.634447013" Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.583859 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.583834281 podStartE2EDuration="2.583834281s" podCreationTimestamp="2026-01-21 16:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:29.575773015 +0000 UTC m=+5851.652606044" watchObservedRunningTime="2026-01-21 16:11:29.583834281 +0000 UTC m=+5851.660667320" Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.905380 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mqjfk"
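The two "Observed pod startup duration" entries above are internally consistent: with no image pull (firstStartedPulling and lastFinishedPulling at the zero time), podStartSLOduration equals podStartE2EDuration, i.e. the watch-observed running time minus the pod creation timestamp. A quick check in Go, with the times copied from the nova-metadata-0 entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-21T16:11:27Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2026-01-21T16:11:29.557613984Z")
	// Prints 2.557613984s, matching podStartSLOduration above.
	fmt.Println(observed.Sub(created))
}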
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.930862 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj5cn\" (UniqueName: \"kubernetes.io/projected/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-kube-api-access-vj5cn\") pod \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.930999 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-config-data\") pod \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.931089 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-scripts\") pod \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.931174 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-combined-ca-bundle\") pod \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.936252 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-kube-api-access-vj5cn" (OuterVolumeSpecName: "kube-api-access-vj5cn") pod "6fafbdf5-1100-4f6f-831e-c7dd0fc63586" (UID: "6fafbdf5-1100-4f6f-831e-c7dd0fc63586"). InnerVolumeSpecName "kube-api-access-vj5cn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.936522 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-scripts" (OuterVolumeSpecName: "scripts") pod "6fafbdf5-1100-4f6f-831e-c7dd0fc63586" (UID: "6fafbdf5-1100-4f6f-831e-c7dd0fc63586"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.974501 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-config-data" (OuterVolumeSpecName: "config-data") pod "6fafbdf5-1100-4f6f-831e-c7dd0fc63586" (UID: "6fafbdf5-1100-4f6f-831e-c7dd0fc63586"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.980745 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fafbdf5-1100-4f6f-831e-c7dd0fc63586" (UID: "6fafbdf5-1100-4f6f-831e-c7dd0fc63586"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.034040 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj5cn\" (UniqueName: \"kubernetes.io/projected/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-kube-api-access-vj5cn\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.034099 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.034112 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.034126 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.510707 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.510631 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" event={"ID":"6fafbdf5-1100-4f6f-831e-c7dd0fc63586","Type":"ContainerDied","Data":"a74358b1c5a77d77722a14ca051562ae9ede85ed8984c1a6ea4d025963ae5d19"} Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.510766 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a74358b1c5a77d77722a14ca051562ae9ede85ed8984c1a6ea4d025963ae5d19" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.614977 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 16:11:30 crc kubenswrapper[4902]: E0121 16:11:30.615485 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fafbdf5-1100-4f6f-831e-c7dd0fc63586" containerName="nova-cell1-conductor-db-sync" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.615504 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fafbdf5-1100-4f6f-831e-c7dd0fc63586" containerName="nova-cell1-conductor-db-sync" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.615778 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fafbdf5-1100-4f6f-831e-c7dd0fc63586" containerName="nova-cell1-conductor-db-sync" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.616536 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.616632 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.634203 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.647386 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgv4h\" (UniqueName: \"kubernetes.io/projected/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-kube-api-access-zgv4h\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.647806 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.647899 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.748765 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgv4h\" (UniqueName: \"kubernetes.io/projected/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-kube-api-access-zgv4h\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.748813 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.748882 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.766081 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.766456 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.767819 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgv4h\" (UniqueName: 
\"kubernetes.io/projected/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-kube-api-access-zgv4h\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.930888 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.953102 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-scripts\") pod \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.953316 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbbjh\" (UniqueName: \"kubernetes.io/projected/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-kube-api-access-sbbjh\") pod \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.953458 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-combined-ca-bundle\") pod \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.953526 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-config-data\") pod \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.957829 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-kube-api-access-sbbjh" (OuterVolumeSpecName: "kube-api-access-sbbjh") pod "beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" (UID: "beebb97d-c56a-4c7d-8ec0-f9982f9c2e32"). InnerVolumeSpecName "kube-api-access-sbbjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.960582 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-scripts" (OuterVolumeSpecName: "scripts") pod "beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" (UID: "beebb97d-c56a-4c7d-8ec0-f9982f9c2e32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.968148 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.000946 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-config-data" (OuterVolumeSpecName: "config-data") pod "beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" (UID: "beebb97d-c56a-4c7d-8ec0-f9982f9c2e32"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.007745 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" (UID: "beebb97d-c56a-4c7d-8ec0-f9982f9c2e32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.055508 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.055564 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.055575 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.055586 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbbjh\" (UniqueName: \"kubernetes.io/projected/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-kube-api-access-sbbjh\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.406631 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 16:11:31 crc kubenswrapper[4902]: W0121 16:11:31.408125 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7b3d3ef_1806_4318_95f7_eb9cd2526d32.slice/crio-37bcc25cebf61837fdb3efe7f1d84f593d9fd45e8c61da49dcac811a63bcaa34 WatchSource:0}: Error finding container 37bcc25cebf61837fdb3efe7f1d84f593d9fd45e8c61da49dcac811a63bcaa34: Status 404 returned error can't find the container with id 37bcc25cebf61837fdb3efe7f1d84f593d9fd45e8c61da49dcac811a63bcaa34 Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.520189 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7ld7m" event={"ID":"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32","Type":"ContainerDied","Data":"aff4329b968547b9f0c29b41c0b1ec28e6bda5a703c8e7a44c04923bf8513724"} Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.520232 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aff4329b968547b9f0c29b41c0b1ec28e6bda5a703c8e7a44c04923bf8513724" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.520258 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.522837 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f7b3d3ef-1806-4318-95f7-eb9cd2526d32","Type":"ContainerStarted","Data":"37bcc25cebf61837fdb3efe7f1d84f593d9fd45e8c61da49dcac811a63bcaa34"} Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.717523 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.718307 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerName="nova-api-log" containerID="cri-o://5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b" gracePeriod=30 Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.718347 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerName="nova-api-api" containerID="cri-o://1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82" gracePeriod=30 Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.745079 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.749138 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="57ebee9b-653a-4d49-9002-23c81b622b7c" containerName="nova-scheduler-scheduler" containerID="cri-o://31fae740e9f177c6066f6345aa9a2713697f906cf0f7e1329e3a321359e5144b" gracePeriod=30 Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.769255 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.769494 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" containerName="nova-metadata-log" containerID="cri-o://badd389d3e26fbdcc8666e8bf066881f6cf09b62a13c95591c64f06bb805654a" gracePeriod=30 Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.769644 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" containerName="nova-metadata-metadata" containerID="cri-o://6db095d59697014f5d540bbdbf584a4f12b528507f890ae6dcc568fbd9d40309" gracePeriod=30 Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.463279 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.560183 4902 generic.go:334] "Generic (PLEG): container finished" podID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerID="1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82" exitCode=0 Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.560455 4902 generic.go:334] "Generic (PLEG): container finished" podID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerID="5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b" exitCode=143 Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.560396 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6e0e21-ab4e-40db-ad7d-fde50926c691","Type":"ContainerDied","Data":"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82"} Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.560522 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.560548 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6e0e21-ab4e-40db-ad7d-fde50926c691","Type":"ContainerDied","Data":"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b"} Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.560562 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6e0e21-ab4e-40db-ad7d-fde50926c691","Type":"ContainerDied","Data":"4b639bfa91be42f81663a77a5e76c1832f5e50df04be72677151e02c7b0de405"} Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.560579 4902 scope.go:117] "RemoveContainer" containerID="1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.568471 4902 generic.go:334] "Generic (PLEG): container finished" podID="06cee7ae-d3df-4c78-8056-2877e835409a" containerID="6db095d59697014f5d540bbdbf584a4f12b528507f890ae6dcc568fbd9d40309" exitCode=0 Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.568505 4902 generic.go:334] "Generic (PLEG): container finished" podID="06cee7ae-d3df-4c78-8056-2877e835409a" containerID="badd389d3e26fbdcc8666e8bf066881f6cf09b62a13c95591c64f06bb805654a" exitCode=143 Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.568573 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06cee7ae-d3df-4c78-8056-2877e835409a","Type":"ContainerDied","Data":"6db095d59697014f5d540bbdbf584a4f12b528507f890ae6dcc568fbd9d40309"} Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.568619 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06cee7ae-d3df-4c78-8056-2877e835409a","Type":"ContainerDied","Data":"badd389d3e26fbdcc8666e8bf066881f6cf09b62a13c95591c64f06bb805654a"} Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.570196 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f7b3d3ef-1806-4318-95f7-eb9cd2526d32","Type":"ContainerStarted","Data":"8ca33c8fc5a0e63e441bf1f4d2ea2248656dab9b2afd65c9c46a409be9c991bf"} Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.571560 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.597251 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.59722799 podStartE2EDuration="2.59722799s" podCreationTimestamp="2026-01-21 16:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:32.589626166 +0000 UTC m=+5854.666459195" watchObservedRunningTime="2026-01-21 16:11:32.59722799 +0000 UTC m=+5854.674061019" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.647396 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-config-data\") pod \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.647957 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pq4g\" (UniqueName: \"kubernetes.io/projected/3a6e0e21-ab4e-40db-ad7d-fde50926c691-kube-api-access-4pq4g\") pod \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.648002 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a6e0e21-ab4e-40db-ad7d-fde50926c691-logs\") pod \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.648110 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-combined-ca-bundle\") pod \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.649357 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a6e0e21-ab4e-40db-ad7d-fde50926c691-logs" (OuterVolumeSpecName: "logs") pod "3a6e0e21-ab4e-40db-ad7d-fde50926c691" (UID: "3a6e0e21-ab4e-40db-ad7d-fde50926c691"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.654395 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a6e0e21-ab4e-40db-ad7d-fde50926c691-kube-api-access-4pq4g" (OuterVolumeSpecName: "kube-api-access-4pq4g") pod "3a6e0e21-ab4e-40db-ad7d-fde50926c691" (UID: "3a6e0e21-ab4e-40db-ad7d-fde50926c691"). InnerVolumeSpecName "kube-api-access-4pq4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.673196 4902 scope.go:117] "RemoveContainer" containerID="5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.688133 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-config-data" (OuterVolumeSpecName: "config-data") pod "3a6e0e21-ab4e-40db-ad7d-fde50926c691" (UID: "3a6e0e21-ab4e-40db-ad7d-fde50926c691"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.691329 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a6e0e21-ab4e-40db-ad7d-fde50926c691" (UID: "3a6e0e21-ab4e-40db-ad7d-fde50926c691"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.748432 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.749300 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.749328 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.749337 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pq4g\" (UniqueName: \"kubernetes.io/projected/3a6e0e21-ab4e-40db-ad7d-fde50926c691-kube-api-access-4pq4g\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.749346 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a6e0e21-ab4e-40db-ad7d-fde50926c691-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.757143 4902 scope.go:117] "RemoveContainer" containerID="1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82" Jan 21 16:11:32 crc kubenswrapper[4902]: E0121 16:11:32.760769 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82\": container with ID starting with 1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82 not found: ID does not exist" containerID="1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.760808 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82"} err="failed to get container status \"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82\": rpc error: code = NotFound desc = could not find container \"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82\": container with ID starting with 1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82 not found: ID does not exist" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.760829 4902 scope.go:117] "RemoveContainer" containerID="5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b" Jan 21 16:11:32 crc kubenswrapper[4902]: E0121 16:11:32.761186 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b\": container with ID starting with 5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b not found: ID does not exist" 
containerID="5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.761222 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b"} err="failed to get container status \"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b\": rpc error: code = NotFound desc = could not find container \"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b\": container with ID starting with 5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b not found: ID does not exist" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.761267 4902 scope.go:117] "RemoveContainer" containerID="1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.761495 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82"} err="failed to get container status \"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82\": rpc error: code = NotFound desc = could not find container \"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82\": container with ID starting with 1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82 not found: ID does not exist" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.761511 4902 scope.go:117] "RemoveContainer" containerID="5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.761682 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b"} err="failed to get container status \"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b\": rpc error: code = NotFound desc = could not find container \"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b\": container with ID starting with 5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b not found: ID does not exist" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.850109 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06cee7ae-d3df-4c78-8056-2877e835409a-logs\") pod \"06cee7ae-d3df-4c78-8056-2877e835409a\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.850277 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-config-data\") pod \"06cee7ae-d3df-4c78-8056-2877e835409a\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.850356 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-combined-ca-bundle\") pod \"06cee7ae-d3df-4c78-8056-2877e835409a\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.850431 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-nova-metadata-tls-certs\") pod 
\"06cee7ae-d3df-4c78-8056-2877e835409a\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.850479 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpjw6\" (UniqueName: \"kubernetes.io/projected/06cee7ae-d3df-4c78-8056-2877e835409a-kube-api-access-cpjw6\") pod \"06cee7ae-d3df-4c78-8056-2877e835409a\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.851286 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06cee7ae-d3df-4c78-8056-2877e835409a-logs" (OuterVolumeSpecName: "logs") pod "06cee7ae-d3df-4c78-8056-2877e835409a" (UID: "06cee7ae-d3df-4c78-8056-2877e835409a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.854111 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06cee7ae-d3df-4c78-8056-2877e835409a-kube-api-access-cpjw6" (OuterVolumeSpecName: "kube-api-access-cpjw6") pod "06cee7ae-d3df-4c78-8056-2877e835409a" (UID: "06cee7ae-d3df-4c78-8056-2877e835409a"). InnerVolumeSpecName "kube-api-access-cpjw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.882527 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-config-data" (OuterVolumeSpecName: "config-data") pod "06cee7ae-d3df-4c78-8056-2877e835409a" (UID: "06cee7ae-d3df-4c78-8056-2877e835409a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.884267 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.892527 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06cee7ae-d3df-4c78-8056-2877e835409a" (UID: "06cee7ae-d3df-4c78-8056-2877e835409a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.903929 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "06cee7ae-d3df-4c78-8056-2877e835409a" (UID: "06cee7ae-d3df-4c78-8056-2877e835409a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.941446 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.942852 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.954636 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06cee7ae-d3df-4c78-8056-2877e835409a-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.954676 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.954688 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.954698 4902 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.954708 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpjw6\" (UniqueName: \"kubernetes.io/projected/06cee7ae-d3df-4c78-8056-2877e835409a-kube-api-access-cpjw6\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.969324 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.981372 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ddb658677-chfv4"] Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.981680 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-ddb658677-chfv4" podUID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" containerName="dnsmasq-dns" containerID="cri-o://ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7" gracePeriod=10 Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000117 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.000557 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerName="nova-api-log" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000571 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerName="nova-api-log" Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.000591 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" containerName="nova-metadata-metadata" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000597 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" containerName="nova-metadata-metadata" Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.000611 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerName="nova-api-api" Jan 21 16:11:33 crc 
kubenswrapper[4902]: I0121 16:11:33.000618 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerName="nova-api-api" Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.000638 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" containerName="nova-metadata-log" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000644 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" containerName="nova-metadata-log" Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.000665 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" containerName="nova-manage" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000677 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" containerName="nova-manage" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000873 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" containerName="nova-manage" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000891 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" containerName="nova-metadata-metadata" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000902 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerName="nova-api-api" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000913 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" containerName="nova-metadata-log" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000927 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerName="nova-api-log" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.002130 4902 util.go:30] "No sandbox for pod can be found. 
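The cpu_manager/memory_manager "RemoveStaleState" burst above fires when the replacement nova-api-0 is admitted: resource assignments are kept per (podUID, containerName), and entries belonging to the just-deleted UIDs are swept out before the new pod is accounted. A rough sketch of that bookkeeping (names and types are illustrative only, not kubelet's actual cpumanager):

// stalestate.go - the shape of the "RemoveStaleState" sweep.
package main

import "fmt"

type key struct{ podUID, container string }

// assignments maps (podUID, containerName) to a resource assignment,
// e.g. a CPU set; stale entries survive pod deletion until the next sweep.
type assignments map[key]string

// removeStaleState drops every entry whose pod UID is no longer active.
func (a assignments) removeStaleState(active map[string]bool) {
	for k := range a {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %q\n", k.container, k.podUID)
			delete(a, k)
		}
	}
}

func main() {
	a := assignments{
		{"3a6e0e21", "nova-api-api"}: "cpus 0-1", // old, deleted pod
		{"11cfdeec", "nova-api-api"}: "cpus 2-3", // replacement pod
	}
	a.removeStaleState(map[string]bool{"11cfdeec": true})
	fmt.Println(len(a), "assignment(s) left") // 1
}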
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.008798 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.016548 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.057239 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-logs\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.057347 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kfth\" (UniqueName: \"kubernetes.io/projected/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-kube-api-access-7kfth\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.057828 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.057898 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-config-data\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.160582 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-logs\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.160705 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kfth\" (UniqueName: \"kubernetes.io/projected/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-kube-api-access-7kfth\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.160786 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.160821 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-config-data\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.161760 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-logs\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " 
pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.169388 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-config-data\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.170909 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.187767 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kfth\" (UniqueName: \"kubernetes.io/projected/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-kube-api-access-7kfth\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.328759 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.422308 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.465556 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-nb\") pod \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.465597 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-sb\") pod \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.465676 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-dns-svc\") pod \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.465738 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-config\") pod \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.465886 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5rm5\" (UniqueName: \"kubernetes.io/projected/9a24ae7c-3fa5-479a-84b4-56ad2792d386-kube-api-access-w5rm5\") pod \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.472942 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a24ae7c-3fa5-479a-84b4-56ad2792d386-kube-api-access-w5rm5" (OuterVolumeSpecName: "kube-api-access-w5rm5") pod "9a24ae7c-3fa5-479a-84b4-56ad2792d386" (UID: "9a24ae7c-3fa5-479a-84b4-56ad2792d386"). 
InnerVolumeSpecName "kube-api-access-w5rm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.526723 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9a24ae7c-3fa5-479a-84b4-56ad2792d386" (UID: "9a24ae7c-3fa5-479a-84b4-56ad2792d386"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.533448 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9a24ae7c-3fa5-479a-84b4-56ad2792d386" (UID: "9a24ae7c-3fa5-479a-84b4-56ad2792d386"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.541116 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9a24ae7c-3fa5-479a-84b4-56ad2792d386" (UID: "9a24ae7c-3fa5-479a-84b4-56ad2792d386"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.541970 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-config" (OuterVolumeSpecName: "config") pod "9a24ae7c-3fa5-479a-84b4-56ad2792d386" (UID: "9a24ae7c-3fa5-479a-84b4-56ad2792d386"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.570005 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5rm5\" (UniqueName: \"kubernetes.io/projected/9a24ae7c-3fa5-479a-84b4-56ad2792d386-kube-api-access-w5rm5\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.570077 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.570092 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.570105 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.570118 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.596385 4902 generic.go:334] "Generic (PLEG): container finished" podID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" containerID="ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7" exitCode=0 Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.596458 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-ddb658677-chfv4" event={"ID":"9a24ae7c-3fa5-479a-84b4-56ad2792d386","Type":"ContainerDied","Data":"ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7"} Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.596494 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ddb658677-chfv4" event={"ID":"9a24ae7c-3fa5-479a-84b4-56ad2792d386","Type":"ContainerDied","Data":"cdda0da4083f2f0e9099ddfcab1f5b7e57fb3c8539f90cf5b020d3761c23f6b0"} Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.596516 4902 scope.go:117] "RemoveContainer" containerID="ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.596658 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.610273 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.614568 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06cee7ae-d3df-4c78-8056-2877e835409a","Type":"ContainerDied","Data":"c7e7aaf7977a7ad1cca77abff20f575bad05c5750d7ec60c2c6c5384633a215a"} Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.661201 4902 scope.go:117] "RemoveContainer" containerID="84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.701062 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ddb658677-chfv4"] Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.715965 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-ddb658677-chfv4"] Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.720592 4902 scope.go:117] "RemoveContainer" containerID="ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7" Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.721191 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7\": container with ID starting with ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7 not found: ID does not exist" containerID="ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.721234 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7"} err="failed to get container status \"ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7\": rpc error: code = NotFound desc = could not find container \"ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7\": container with ID starting with ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7 not found: ID does not exist" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.721257 4902 scope.go:117] "RemoveContainer" containerID="84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9" Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.721459 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9\": container with ID starting 
with 84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9 not found: ID does not exist" containerID="84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.721478 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9"} err="failed to get container status \"84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9\": rpc error: code = NotFound desc = could not find container \"84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9\": container with ID starting with 84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9 not found: ID does not exist" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.721491 4902 scope.go:117] "RemoveContainer" containerID="6db095d59697014f5d540bbdbf584a4f12b528507f890ae6dcc568fbd9d40309" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.752133 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.754891 4902 scope.go:117] "RemoveContainer" containerID="badd389d3e26fbdcc8666e8bf066881f6cf09b62a13c95591c64f06bb805654a" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.763061 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.770807 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.771309 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" containerName="init" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.771333 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" containerName="init" Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.771365 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" containerName="dnsmasq-dns" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.771374 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" containerName="dnsmasq-dns" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.771599 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" containerName="dnsmasq-dns" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.773390 4902 util.go:30] "No sandbox for pod can be found. 
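The generic.go "container finished" and kubelet.go "SyncLoop (PLEG): event for pod" lines are the pod lifecycle event generator relisting runtime state and feeding ContainerStarted/ContainerDied events into the sync loop. When reading a log like this one it can help to extract just those events; a small helper fitted to this log's format (the regular expression is an assumption about the log text, not a stable interface):

// pleggrep.go - pull PLEG events out of kubelet log lines like the above.
package main

import (
	"fmt"
	"regexp"
)

var plegRE = regexp.MustCompile(
	`"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"([^"]+)","Type":"([^"]+)","Data":"([^"]+)"\}`)

func main() {
	line := `I0121 16:11:33.596458 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ddb658677-chfv4" event={"ID":"9a24ae7c-3fa5-479a-84b4-56ad2792d386","Type":"ContainerDied","Data":"ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7"}`
	if m := plegRE.FindStringSubmatch(line); m != nil {
		// m[1]=pod, m[2]=pod UID, m[3]=event type, m[4]=container/sandbox ID
		fmt.Printf("pod=%s type=%s container=%s\n", m[1], m[3], m[4])
	}
}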
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.775626 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.775669 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.782085 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.830458 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:33 crc kubenswrapper[4902]: W0121 16:11:33.832418 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11cfdeec_5c4f_4051_8c8d_3c4c3e648e87.slice/crio-869e26c7cdc408cc6d30fb73754154da4ed626aa2bcfbfb82320a1a50bc79cff WatchSource:0}: Error finding container 869e26c7cdc408cc6d30fb73754154da4ed626aa2bcfbfb82320a1a50bc79cff: Status 404 returned error can't find the container with id 869e26c7cdc408cc6d30fb73754154da4ed626aa2bcfbfb82320a1a50bc79cff Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.884580 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.884644 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-logs\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.884722 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-config-data\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.884762 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9qv8\" (UniqueName: \"kubernetes.io/projected/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-kube-api-access-k9qv8\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.884835 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.986509 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc 
kubenswrapper[4902]: I0121 16:11:33.986579 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.986612 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-logs\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.986671 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-config-data\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.986699 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9qv8\" (UniqueName: \"kubernetes.io/projected/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-kube-api-access-k9qv8\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.987303 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-logs\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.992550 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.993446 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-config-data\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.993583 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.012625 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9qv8\" (UniqueName: \"kubernetes.io/projected/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-kube-api-access-k9qv8\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.100263 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.314336 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" path="/var/lib/kubelet/pods/06cee7ae-d3df-4c78-8056-2877e835409a/volumes" Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.315769 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" path="/var/lib/kubelet/pods/3a6e0e21-ab4e-40db-ad7d-fde50926c691/volumes" Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.316963 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" path="/var/lib/kubelet/pods/9a24ae7c-3fa5-479a-84b4-56ad2792d386/volumes" Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.554055 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:34 crc kubenswrapper[4902]: W0121 16:11:34.555974 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b3b10dc_7950_4a5c_a31d_3fc11ce4de05.slice/crio-5dc878fe2810b860179baaee0d2ae9f7776b38daf321c444be0ed0a6d720d1e8 WatchSource:0}: Error finding container 5dc878fe2810b860179baaee0d2ae9f7776b38daf321c444be0ed0a6d720d1e8: Status 404 returned error can't find the container with id 5dc878fe2810b860179baaee0d2ae9f7776b38daf321c444be0ed0a6d720d1e8 Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.622438 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87","Type":"ContainerStarted","Data":"84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692"} Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.622484 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87","Type":"ContainerStarted","Data":"486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681"} Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.622500 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87","Type":"ContainerStarted","Data":"869e26c7cdc408cc6d30fb73754154da4ed626aa2bcfbfb82320a1a50bc79cff"} Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.625637 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05","Type":"ContainerStarted","Data":"5dc878fe2810b860179baaee0d2ae9f7776b38daf321c444be0ed0a6d720d1e8"} Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.647469 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.647447756 podStartE2EDuration="2.647447756s" podCreationTimestamp="2026-01-21 16:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:34.639018349 +0000 UTC m=+5856.715851388" watchObservedRunningTime="2026-01-21 16:11:34.647447756 +0000 UTC m=+5856.724280785" Jan 21 16:11:35 crc kubenswrapper[4902]: I0121 16:11:35.648865 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05","Type":"ContainerStarted","Data":"42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e"} Jan 21 
16:11:35 crc kubenswrapper[4902]: I0121 16:11:35.649243 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05","Type":"ContainerStarted","Data":"1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f"} Jan 21 16:11:35 crc kubenswrapper[4902]: I0121 16:11:35.679067 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.679022571 podStartE2EDuration="2.679022571s" podCreationTimestamp="2026-01-21 16:11:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:35.676749538 +0000 UTC m=+5857.753582587" watchObservedRunningTime="2026-01-21 16:11:35.679022571 +0000 UTC m=+5857.755855620" Jan 21 16:11:37 crc kubenswrapper[4902]: I0121 16:11:37.934635 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:37 crc kubenswrapper[4902]: I0121 16:11:37.960359 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:38 crc kubenswrapper[4902]: I0121 16:11:38.694257 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:39 crc kubenswrapper[4902]: I0121 16:11:39.100981 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:11:39 crc kubenswrapper[4902]: I0121 16:11:39.101147 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.005552 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.528129 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-qd5pv"] Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.530203 4902 util.go:30] "No sandbox for pod can be found. 
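The pod_startup_latency_tracker lines can be checked by hand: with no image pulls (both pull timestamps are the zero time), podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp, e.g. 16:11:34.647447756 minus 16:11:32 = 2.647447756s for nova-api-0. Verified in Go (the timestamp layout matches Go's time.Time string form used in these lines):

// startuplatency.go - re-deriving podStartSLOduration from the log fields.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	created, _ := time.Parse(layout, "2026-01-21 16:11:32 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-01-21 16:11:34.647447756 +0000 UTC")
	fmt.Println(observed.Sub(created).Seconds()) // 2.647447756
}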
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.533118 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.533404 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.543676 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qd5pv"] Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.645960 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.646028 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-config-data\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.646288 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq8ks\" (UniqueName: \"kubernetes.io/projected/87c2e205-1cb6-4b63-89d5-c03370d5cb02-kube-api-access-vq8ks\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.646622 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-scripts\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.748732 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq8ks\" (UniqueName: \"kubernetes.io/projected/87c2e205-1cb6-4b63-89d5-c03370d5cb02-kube-api-access-vq8ks\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.749380 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-scripts\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.750529 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.750718 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-config-data\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.755624 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-scripts\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.756816 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-config-data\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.766454 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq8ks\" (UniqueName: \"kubernetes.io/projected/87c2e205-1cb6-4b63-89d5-c03370d5cb02-kube-api-access-vq8ks\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.774100 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.863992 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:42 crc kubenswrapper[4902]: I0121 16:11:42.319009 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qd5pv"] Jan 21 16:11:42 crc kubenswrapper[4902]: W0121 16:11:42.321018 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87c2e205_1cb6_4b63_89d5_c03370d5cb02.slice/crio-e424a70566b2f7137d7456e4862b7715141b75fada4e7de5d82ecb3129696b33 WatchSource:0}: Error finding container e424a70566b2f7137d7456e4862b7715141b75fada4e7de5d82ecb3129696b33: Status 404 returned error can't find the container with id e424a70566b2f7137d7456e4862b7715141b75fada4e7de5d82ecb3129696b33 Jan 21 16:11:42 crc kubenswrapper[4902]: I0121 16:11:42.719591 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qd5pv" event={"ID":"87c2e205-1cb6-4b63-89d5-c03370d5cb02","Type":"ContainerStarted","Data":"2c69e68e7d02d1de6bf68e1e65e17ee7498b6d1191ba5efd74e3f15243d799ed"} Jan 21 16:11:42 crc kubenswrapper[4902]: I0121 16:11:42.719895 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qd5pv" event={"ID":"87c2e205-1cb6-4b63-89d5-c03370d5cb02","Type":"ContainerStarted","Data":"e424a70566b2f7137d7456e4862b7715141b75fada4e7de5d82ecb3129696b33"} Jan 21 16:11:42 crc kubenswrapper[4902]: I0121 16:11:42.744076 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-qd5pv" podStartSLOduration=1.74403542 podStartE2EDuration="1.74403542s" podCreationTimestamp="2026-01-21 16:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:42.735821849 +0000 UTC m=+5864.812654888" watchObservedRunningTime="2026-01-21 16:11:42.74403542 +0000 UTC m=+5864.820868459" Jan 21 16:11:43 crc kubenswrapper[4902]: I0121 16:11:43.329893 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:11:43 crc kubenswrapper[4902]: I0121 16:11:43.330247 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:11:44 crc kubenswrapper[4902]: I0121 16:11:44.101565 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:11:44 crc kubenswrapper[4902]: I0121 16:11:44.101609 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:11:44 crc kubenswrapper[4902]: I0121 16:11:44.414231 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.82:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:11:44 crc kubenswrapper[4902]: I0121 16:11:44.414231 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.82:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:11:45 crc kubenswrapper[4902]: I0121 16:11:45.118508 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.83:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:11:45 crc kubenswrapper[4902]: I0121 16:11:45.118516 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.83:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:11:47 crc kubenswrapper[4902]: I0121 16:11:47.773722 4902 generic.go:334] "Generic (PLEG): container finished" podID="87c2e205-1cb6-4b63-89d5-c03370d5cb02" containerID="2c69e68e7d02d1de6bf68e1e65e17ee7498b6d1191ba5efd74e3f15243d799ed" exitCode=0 Jan 21 16:11:47 crc kubenswrapper[4902]: I0121 16:11:47.773797 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qd5pv" event={"ID":"87c2e205-1cb6-4b63-89d5-c03370d5cb02","Type":"ContainerDied","Data":"2c69e68e7d02d1de6bf68e1e65e17ee7498b6d1191ba5efd74e3f15243d799ed"} Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.131024 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.302698 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq8ks\" (UniqueName: \"kubernetes.io/projected/87c2e205-1cb6-4b63-89d5-c03370d5cb02-kube-api-access-vq8ks\") pod \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.302819 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-combined-ca-bundle\") pod \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.302892 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-config-data\") pod \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.303013 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-scripts\") pod \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.308663 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87c2e205-1cb6-4b63-89d5-c03370d5cb02-kube-api-access-vq8ks" (OuterVolumeSpecName: "kube-api-access-vq8ks") pod "87c2e205-1cb6-4b63-89d5-c03370d5cb02" (UID: "87c2e205-1cb6-4b63-89d5-c03370d5cb02"). InnerVolumeSpecName "kube-api-access-vq8ks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.310303 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-scripts" (OuterVolumeSpecName: "scripts") pod "87c2e205-1cb6-4b63-89d5-c03370d5cb02" (UID: "87c2e205-1cb6-4b63-89d5-c03370d5cb02"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.331769 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-config-data" (OuterVolumeSpecName: "config-data") pod "87c2e205-1cb6-4b63-89d5-c03370d5cb02" (UID: "87c2e205-1cb6-4b63-89d5-c03370d5cb02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.348871 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87c2e205-1cb6-4b63-89d5-c03370d5cb02" (UID: "87c2e205-1cb6-4b63-89d5-c03370d5cb02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.405227 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.405265 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.405277 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.405290 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq8ks\" (UniqueName: \"kubernetes.io/projected/87c2e205-1cb6-4b63-89d5-c03370d5cb02-kube-api-access-vq8ks\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.798536 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qd5pv" event={"ID":"87c2e205-1cb6-4b63-89d5-c03370d5cb02","Type":"ContainerDied","Data":"e424a70566b2f7137d7456e4862b7715141b75fada4e7de5d82ecb3129696b33"} Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.798573 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e424a70566b2f7137d7456e4862b7715141b75fada4e7de5d82ecb3129696b33" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.798614 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.000458 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.000812 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-log" containerID="cri-o://486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681" gracePeriod=30 Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.000932 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-api" containerID="cri-o://84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692" gracePeriod=30 Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.040514 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.040963 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-log" containerID="cri-o://1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f" gracePeriod=30 Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.041134 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-metadata" containerID="cri-o://42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e" gracePeriod=30 Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.808829 4902 generic.go:334] "Generic (PLEG): container finished" podID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerID="1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f" exitCode=143 Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.808910 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05","Type":"ContainerDied","Data":"1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f"} Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.811690 4902 generic.go:334] "Generic (PLEG): container finished" podID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerID="486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681" exitCode=143 Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.811868 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87","Type":"ContainerDied","Data":"486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681"} Jan 21 16:12:01 crc kubenswrapper[4902]: I0121 16:12:01.940768 4902 generic.go:334] "Generic (PLEG): container finished" podID="57ebee9b-653a-4d49-9002-23c81b622b7c" containerID="31fae740e9f177c6066f6345aa9a2713697f906cf0f7e1329e3a321359e5144b" exitCode=137 Jan 21 16:12:01 crc kubenswrapper[4902]: I0121 16:12:01.940995 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"57ebee9b-653a-4d49-9002-23c81b622b7c","Type":"ContainerDied","Data":"31fae740e9f177c6066f6345aa9a2713697f906cf0f7e1329e3a321359e5144b"} Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.249011 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.277604 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-combined-ca-bundle\") pod \"57ebee9b-653a-4d49-9002-23c81b622b7c\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.277656 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdcvq\" (UniqueName: \"kubernetes.io/projected/57ebee9b-653a-4d49-9002-23c81b622b7c-kube-api-access-vdcvq\") pod \"57ebee9b-653a-4d49-9002-23c81b622b7c\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.277697 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-config-data\") pod \"57ebee9b-653a-4d49-9002-23c81b622b7c\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.283036 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ebee9b-653a-4d49-9002-23c81b622b7c-kube-api-access-vdcvq" (OuterVolumeSpecName: "kube-api-access-vdcvq") pod "57ebee9b-653a-4d49-9002-23c81b622b7c" (UID: "57ebee9b-653a-4d49-9002-23c81b622b7c"). InnerVolumeSpecName "kube-api-access-vdcvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.306207 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-config-data" (OuterVolumeSpecName: "config-data") pod "57ebee9b-653a-4d49-9002-23c81b622b7c" (UID: "57ebee9b-653a-4d49-9002-23c81b622b7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.309720 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57ebee9b-653a-4d49-9002-23c81b622b7c" (UID: "57ebee9b-653a-4d49-9002-23c81b622b7c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.379866 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.380770 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.380895 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdcvq\" (UniqueName: \"kubernetes.io/projected/57ebee9b-653a-4d49-9002-23c81b622b7c-kube-api-access-vdcvq\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.958245 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"57ebee9b-653a-4d49-9002-23c81b622b7c","Type":"ContainerDied","Data":"9ebe5c00f1a81b515c7ecc716c300b5811813ab974f7f4bd90b9fc00489cfc97"} Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.958570 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.958615 4902 scope.go:117] "RemoveContainer" containerID="31fae740e9f177c6066f6345aa9a2713697f906cf0f7e1329e3a321359e5144b" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.010443 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.020763 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.052751 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:12:03 crc kubenswrapper[4902]: E0121 16:12:03.053265 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c2e205-1cb6-4b63-89d5-c03370d5cb02" containerName="nova-manage" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.053291 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c2e205-1cb6-4b63-89d5-c03370d5cb02" containerName="nova-manage" Jan 21 16:12:03 crc kubenswrapper[4902]: E0121 16:12:03.053353 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ebee9b-653a-4d49-9002-23c81b622b7c" containerName="nova-scheduler-scheduler" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.053363 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ebee9b-653a-4d49-9002-23c81b622b7c" containerName="nova-scheduler-scheduler" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.053593 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="87c2e205-1cb6-4b63-89d5-c03370d5cb02" containerName="nova-manage" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.053613 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="57ebee9b-653a-4d49-9002-23c81b622b7c" containerName="nova-scheduler-scheduler" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.054409 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.057035 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.070854 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.094968 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlmhb\" (UniqueName: \"kubernetes.io/projected/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-kube-api-access-tlmhb\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.095057 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.095100 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-config-data\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.196936 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlmhb\" (UniqueName: \"kubernetes.io/projected/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-kube-api-access-tlmhb\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.197017 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.197075 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-config-data\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.203179 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.203338 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-config-data\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.216748 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlmhb\" (UniqueName: 
\"kubernetes.io/projected/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-kube-api-access-tlmhb\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.329375 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.329517 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.377002 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.862523 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:12:03 crc kubenswrapper[4902]: W0121 16:12:03.867397 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d12c9a0_2841_4a53_abd3_0cdb15d404fb.slice/crio-6b0058327c456ab7eb78a9420c03f22a3fd81f785cc01fb07fadf7771d6a7ff7 WatchSource:0}: Error finding container 6b0058327c456ab7eb78a9420c03f22a3fd81f785cc01fb07fadf7771d6a7ff7: Status 404 returned error can't find the container with id 6b0058327c456ab7eb78a9420c03f22a3fd81f785cc01fb07fadf7771d6a7ff7 Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.882630 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.925207 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.970787 4902 generic.go:334] "Generic (PLEG): container finished" podID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerID="42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e" exitCode=0 Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.970876 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.970876 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05","Type":"ContainerDied","Data":"42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e"} Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.972244 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05","Type":"ContainerDied","Data":"5dc878fe2810b860179baaee0d2ae9f7776b38daf321c444be0ed0a6d720d1e8"} Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.972267 4902 scope.go:117] "RemoveContainer" containerID="42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.977315 4902 generic.go:334] "Generic (PLEG): container finished" podID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerID="84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692" exitCode=0 Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.977409 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87","Type":"ContainerDied","Data":"84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692"} Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.977437 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87","Type":"ContainerDied","Data":"869e26c7cdc408cc6d30fb73754154da4ed626aa2bcfbfb82320a1a50bc79cff"} Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.977440 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.999232 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6d12c9a0-2841-4a53-abd3-0cdb15d404fb","Type":"ContainerStarted","Data":"6b0058327c456ab7eb78a9420c03f22a3fd81f785cc01fb07fadf7771d6a7ff7"} Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.014939 4902 scope.go:117] "RemoveContainer" containerID="1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029275 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-config-data\") pod \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029328 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-combined-ca-bundle\") pod \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029360 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-config-data\") pod \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029385 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kfth\" (UniqueName: \"kubernetes.io/projected/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-kube-api-access-7kfth\") pod \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029425 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-logs\") pod \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029476 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-nova-metadata-tls-certs\") pod \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029503 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9qv8\" (UniqueName: \"kubernetes.io/projected/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-kube-api-access-k9qv8\") pod \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029547 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-combined-ca-bundle\") pod \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029591 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-logs\") pod \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.030394 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-logs" (OuterVolumeSpecName: "logs") pod "11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" (UID: "11cfdeec-5c4f-4051-8c8d-3c4c3e648e87"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.030823 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-logs" (OuterVolumeSpecName: "logs") pod "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" (UID: "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.034033 4902 scope.go:117] "RemoveContainer" containerID="42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.034373 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-kube-api-access-k9qv8" (OuterVolumeSpecName: "kube-api-access-k9qv8") pod "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" (UID: "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05"). InnerVolumeSpecName "kube-api-access-k9qv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.034778 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-kube-api-access-7kfth" (OuterVolumeSpecName: "kube-api-access-7kfth") pod "11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" (UID: "11cfdeec-5c4f-4051-8c8d-3c4c3e648e87"). InnerVolumeSpecName "kube-api-access-7kfth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: E0121 16:12:04.038576 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e\": container with ID starting with 42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e not found: ID does not exist" containerID="42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.038620 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e"} err="failed to get container status \"42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e\": rpc error: code = NotFound desc = could not find container \"42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e\": container with ID starting with 42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e not found: ID does not exist" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.038648 4902 scope.go:117] "RemoveContainer" containerID="1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f" Jan 21 16:12:04 crc kubenswrapper[4902]: E0121 16:12:04.039107 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f\": container with ID starting with 1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f not found: ID does not exist" containerID="1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.039152 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f"} err="failed to get container status \"1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f\": rpc error: code = NotFound desc = could not find container \"1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f\": container with ID starting with 1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f not found: ID does not exist" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.039180 4902 scope.go:117] "RemoveContainer" containerID="84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.056067 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-config-data" (OuterVolumeSpecName: "config-data") pod "11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" (UID: "11cfdeec-5c4f-4051-8c8d-3c4c3e648e87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.058828 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" (UID: "11cfdeec-5c4f-4051-8c8d-3c4c3e648e87"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.061262 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" (UID: "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.062317 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-config-data" (OuterVolumeSpecName: "config-data") pod "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" (UID: "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.069644 4902 scope.go:117] "RemoveContainer" containerID="486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.079792 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" (UID: "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.089007 4902 scope.go:117] "RemoveContainer" containerID="84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692" Jan 21 16:12:04 crc kubenswrapper[4902]: E0121 16:12:04.089611 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692\": container with ID starting with 84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692 not found: ID does not exist" containerID="84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.089676 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692"} err="failed to get container status \"84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692\": rpc error: code = NotFound desc = could not find container \"84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692\": container with ID starting with 84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692 not found: ID does not exist" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.089710 4902 scope.go:117] "RemoveContainer" containerID="486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681" Jan 21 16:12:04 crc kubenswrapper[4902]: E0121 16:12:04.090175 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681\": container with ID starting with 486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681 not found: ID does not exist" containerID="486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.090206 4902 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681"} err="failed to get container status \"486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681\": rpc error: code = NotFound desc = could not find container \"486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681\": container with ID starting with 486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681 not found: ID does not exist" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131673 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131706 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131716 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131725 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kfth\" (UniqueName: \"kubernetes.io/projected/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-kube-api-access-7kfth\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131734 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131742 4902 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131751 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9qv8\" (UniqueName: \"kubernetes.io/projected/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-kube-api-access-k9qv8\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131762 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131770 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.309400 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57ebee9b-653a-4d49-9002-23c81b622b7c" path="/var/lib/kubelet/pods/57ebee9b-653a-4d49-9002-23c81b622b7c/volumes" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.315247 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.325513 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.341084 4902 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.364591 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.390493 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:04 crc kubenswrapper[4902]: E0121 16:12:04.390903 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-log" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.390921 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-log" Jan 21 16:12:04 crc kubenswrapper[4902]: E0121 16:12:04.390935 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-metadata" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.390941 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-metadata" Jan 21 16:12:04 crc kubenswrapper[4902]: E0121 16:12:04.390967 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-api" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.390973 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-api" Jan 21 16:12:04 crc kubenswrapper[4902]: E0121 16:12:04.390985 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-log" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.390991 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-log" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.391177 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-metadata" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.391197 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-log" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.391207 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-api" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.391215 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-log" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.392068 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.413799 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.427657 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.436023 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.439565 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.439628 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.446423 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.458757 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539409 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539470 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzc48\" (UniqueName: \"kubernetes.io/projected/07c018ac-4b51-418d-8410-6f3f6e84d0b0-kube-api-access-fzc48\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539495 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539520 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-config-data\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539536 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539560 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c018ac-4b51-418d-8410-6f3f6e84d0b0-logs\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539598 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whg8b\" (UniqueName: \"kubernetes.io/projected/98338524-801f-465f-8845-1d061027c735-kube-api-access-whg8b\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539623 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-config-data\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539665 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98338524-801f-465f-8845-1d061027c735-logs\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.640830 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c018ac-4b51-418d-8410-6f3f6e84d0b0-logs\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.640903 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whg8b\" (UniqueName: \"kubernetes.io/projected/98338524-801f-465f-8845-1d061027c735-kube-api-access-whg8b\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.640932 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-config-data\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.640982 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98338524-801f-465f-8845-1d061027c735-logs\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.641055 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.641083 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzc48\" (UniqueName: \"kubernetes.io/projected/07c018ac-4b51-418d-8410-6f3f6e84d0b0-kube-api-access-fzc48\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.641099 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.641146 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-config-data\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.641164 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.641734 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c018ac-4b51-418d-8410-6f3f6e84d0b0-logs\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.642166 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98338524-801f-465f-8845-1d061027c735-logs\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.645655 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.645713 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-config-data\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.655009 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.656316 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-config-data\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.656747 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.662001 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzc48\" (UniqueName: \"kubernetes.io/projected/07c018ac-4b51-418d-8410-6f3f6e84d0b0-kube-api-access-fzc48\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.669558 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whg8b\" (UniqueName: \"kubernetes.io/projected/98338524-801f-465f-8845-1d061027c735-kube-api-access-whg8b\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.738911 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.756521 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:12:05 crc kubenswrapper[4902]: I0121 16:12:05.013883 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6d12c9a0-2841-4a53-abd3-0cdb15d404fb","Type":"ContainerStarted","Data":"272796d652b0e210858235440b0694234f672ace0166d1702647a8424056a119"} Jan 21 16:12:05 crc kubenswrapper[4902]: I0121 16:12:05.033090 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.033063874 podStartE2EDuration="2.033063874s" podCreationTimestamp="2026-01-21 16:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:12:05.028732502 +0000 UTC m=+5887.105565541" watchObservedRunningTime="2026-01-21 16:12:05.033063874 +0000 UTC m=+5887.109896903" Jan 21 16:12:05 crc kubenswrapper[4902]: I0121 16:12:05.256952 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:12:05 crc kubenswrapper[4902]: I0121 16:12:05.273851 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:05 crc kubenswrapper[4902]: W0121 16:12:05.282566 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07c018ac_4b51_418d_8410_6f3f6e84d0b0.slice/crio-060c58d0d4654f115ba1bf8b0070a811e8ab56e3accae8cf59f2fe82e65ba5ea WatchSource:0}: Error finding container 060c58d0d4654f115ba1bf8b0070a811e8ab56e3accae8cf59f2fe82e65ba5ea: Status 404 returned error can't find the container with id 060c58d0d4654f115ba1bf8b0070a811e8ab56e3accae8cf59f2fe82e65ba5ea Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.021943 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07c018ac-4b51-418d-8410-6f3f6e84d0b0","Type":"ContainerStarted","Data":"e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767"} Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.023088 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07c018ac-4b51-418d-8410-6f3f6e84d0b0","Type":"ContainerStarted","Data":"eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58"} Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.023622 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07c018ac-4b51-418d-8410-6f3f6e84d0b0","Type":"ContainerStarted","Data":"060c58d0d4654f115ba1bf8b0070a811e8ab56e3accae8cf59f2fe82e65ba5ea"} Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.025114 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"98338524-801f-465f-8845-1d061027c735","Type":"ContainerStarted","Data":"5d9233d0170bcce8ade2bfa80238657d2c9370b5536b8df7ceaca5a7602eba77"} Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.025156 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"98338524-801f-465f-8845-1d061027c735","Type":"ContainerStarted","Data":"dd676ea54b6c53446b1c0a9f38edc82c24c523ff131724124eb75e6c356a88a6"} Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.025167 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"98338524-801f-465f-8845-1d061027c735","Type":"ContainerStarted","Data":"32df1557c7fd931265e0a3f2671b981c2fb64d61b4d7cc70b5f3fc1120d35a11"} Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.044987 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.044966486 podStartE2EDuration="2.044966486s" podCreationTimestamp="2026-01-21 16:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:12:06.043567757 +0000 UTC m=+5888.120400796" watchObservedRunningTime="2026-01-21 16:12:06.044966486 +0000 UTC m=+5888.121799515" Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.072035 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.072012336 podStartE2EDuration="2.072012336s" podCreationTimestamp="2026-01-21 16:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:12:06.064287759 +0000 UTC m=+5888.141120828" watchObservedRunningTime="2026-01-21 16:12:06.072012336 +0000 UTC m=+5888.148845355" Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.312235 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" path="/var/lib/kubelet/pods/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87/volumes" Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.312873 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" path="/var/lib/kubelet/pods/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05/volumes" Jan 21 16:12:08 crc kubenswrapper[4902]: I0121 16:12:08.378144 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 16:12:09 crc kubenswrapper[4902]: I0121 16:12:09.757376 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:12:09 crc kubenswrapper[4902]: I0121 16:12:09.757511 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:12:13 crc kubenswrapper[4902]: I0121 16:12:13.378009 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 16:12:13 crc kubenswrapper[4902]: I0121 16:12:13.403956 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 16:12:14 crc kubenswrapper[4902]: I0121 16:12:14.147918 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 16:12:14 crc kubenswrapper[4902]: I0121 16:12:14.739938 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:12:14 crc kubenswrapper[4902]: I0121 16:12:14.740002 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:12:14 crc kubenswrapper[4902]: I0121 16:12:14.757805 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:12:14 crc kubenswrapper[4902]: I0121 16:12:14.757887 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:12:15 crc kubenswrapper[4902]: I0121 16:12:15.892266 4902 prober.go:107] 
"Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.86:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:12:15 crc kubenswrapper[4902]: I0121 16:12:15.892357 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="98338524-801f-465f-8845-1d061027c735" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.87:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:12:15 crc kubenswrapper[4902]: I0121 16:12:15.892473 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.86:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:12:15 crc kubenswrapper[4902]: I0121 16:12:15.892522 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="98338524-801f-465f-8845-1d061027c735" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.87:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.747711 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.748472 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.749278 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.749578 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.752419 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.753190 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.765315 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.774332 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.777347 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.001902 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f7b5475f9-g5lzz"] Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.003578 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.029484 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f7b5475f9-g5lzz"] Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.074135 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-dns-svc\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.074198 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2vm2\" (UniqueName: \"kubernetes.io/projected/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-kube-api-access-h2vm2\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.074272 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-config\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.074291 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.074308 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.176129 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-dns-svc\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.176227 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2vm2\" (UniqueName: \"kubernetes.io/projected/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-kube-api-access-h2vm2\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.176319 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-config\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.176346 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.176372 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.177519 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.178032 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.179410 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-dns-svc\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.181037 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-config\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.207721 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2vm2\" (UniqueName: \"kubernetes.io/projected/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-kube-api-access-h2vm2\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.338655 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.352445 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.946078 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f7b5475f9-g5lzz"] Jan 21 16:12:26 crc kubenswrapper[4902]: I0121 16:12:26.316980 4902 generic.go:334] "Generic (PLEG): container finished" podID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerID="e76932770c6254b11b917bc645b83b0c1aaf28ee17d431c3d586506bef4ab067" exitCode=0 Jan 21 16:12:26 crc kubenswrapper[4902]: I0121 16:12:26.317071 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" event={"ID":"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7","Type":"ContainerDied","Data":"e76932770c6254b11b917bc645b83b0c1aaf28ee17d431c3d586506bef4ab067"} Jan 21 16:12:26 crc kubenswrapper[4902]: I0121 16:12:26.317321 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" event={"ID":"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7","Type":"ContainerStarted","Data":"38677ca61f06b9260ed5f983f8682c334bd87743eff5be88bd87e6a5090aa3da"} Jan 21 16:12:27 crc kubenswrapper[4902]: I0121 16:12:27.327779 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" event={"ID":"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7","Type":"ContainerStarted","Data":"baf3c482643b3ef05bb015530d7c001d912cf37cabd28f9882b045c54788e7f1"} Jan 21 16:12:27 crc kubenswrapper[4902]: I0121 16:12:27.328117 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:27 crc kubenswrapper[4902]: I0121 16:12:27.346056 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" podStartSLOduration=3.346014241 podStartE2EDuration="3.346014241s" podCreationTimestamp="2026-01-21 16:12:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:12:27.343584272 +0000 UTC m=+5909.420417301" watchObservedRunningTime="2026-01-21 16:12:27.346014241 +0000 UTC m=+5909.422847280" Jan 21 16:12:27 crc kubenswrapper[4902]: I0121 16:12:27.803548 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:27 crc kubenswrapper[4902]: I0121 16:12:27.804142 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-api" containerID="cri-o://e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767" gracePeriod=30 Jan 21 16:12:27 crc kubenswrapper[4902]: I0121 16:12:27.804212 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-log" containerID="cri-o://eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58" gracePeriod=30 Jan 21 16:12:28 crc kubenswrapper[4902]: I0121 16:12:28.386573 4902 generic.go:334] "Generic (PLEG): container finished" podID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerID="eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58" exitCode=143 Jan 21 16:12:28 crc kubenswrapper[4902]: I0121 16:12:28.387463 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"07c018ac-4b51-418d-8410-6f3f6e84d0b0","Type":"ContainerDied","Data":"eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58"} Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.387616 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.418489 4902 generic.go:334] "Generic (PLEG): container finished" podID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerID="e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767" exitCode=0 Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.418546 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.418541 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07c018ac-4b51-418d-8410-6f3f6e84d0b0","Type":"ContainerDied","Data":"e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767"} Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.418625 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07c018ac-4b51-418d-8410-6f3f6e84d0b0","Type":"ContainerDied","Data":"060c58d0d4654f115ba1bf8b0070a811e8ab56e3accae8cf59f2fe82e65ba5ea"} Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.418647 4902 scope.go:117] "RemoveContainer" containerID="e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.448452 4902 scope.go:117] "RemoveContainer" containerID="eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.473604 4902 scope.go:117] "RemoveContainer" containerID="e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767" Jan 21 16:12:31 crc kubenswrapper[4902]: E0121 16:12:31.477521 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767\": container with ID starting with e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767 not found: ID does not exist" containerID="e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.477579 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767"} err="failed to get container status \"e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767\": rpc error: code = NotFound desc = could not find container \"e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767\": container with ID starting with e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767 not found: ID does not exist" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.477615 4902 scope.go:117] "RemoveContainer" containerID="eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58" Jan 21 16:12:31 crc kubenswrapper[4902]: E0121 16:12:31.478162 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58\": container with ID starting with eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58 not found: ID does not exist" 
containerID="eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.478206 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58"} err="failed to get container status \"eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58\": rpc error: code = NotFound desc = could not find container \"eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58\": container with ID starting with eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58 not found: ID does not exist" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.496621 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-config-data\") pod \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.496789 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c018ac-4b51-418d-8410-6f3f6e84d0b0-logs\") pod \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.496874 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-combined-ca-bundle\") pod \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.496941 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzc48\" (UniqueName: \"kubernetes.io/projected/07c018ac-4b51-418d-8410-6f3f6e84d0b0-kube-api-access-fzc48\") pod \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.497593 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c018ac-4b51-418d-8410-6f3f6e84d0b0-logs" (OuterVolumeSpecName: "logs") pod "07c018ac-4b51-418d-8410-6f3f6e84d0b0" (UID: "07c018ac-4b51-418d-8410-6f3f6e84d0b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.512241 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c018ac-4b51-418d-8410-6f3f6e84d0b0-kube-api-access-fzc48" (OuterVolumeSpecName: "kube-api-access-fzc48") pod "07c018ac-4b51-418d-8410-6f3f6e84d0b0" (UID: "07c018ac-4b51-418d-8410-6f3f6e84d0b0"). InnerVolumeSpecName "kube-api-access-fzc48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.539661 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07c018ac-4b51-418d-8410-6f3f6e84d0b0" (UID: "07c018ac-4b51-418d-8410-6f3f6e84d0b0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.564310 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-config-data" (OuterVolumeSpecName: "config-data") pod "07c018ac-4b51-418d-8410-6f3f6e84d0b0" (UID: "07c018ac-4b51-418d-8410-6f3f6e84d0b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.599303 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.599437 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c018ac-4b51-418d-8410-6f3f6e84d0b0-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.599529 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.599603 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzc48\" (UniqueName: \"kubernetes.io/projected/07c018ac-4b51-418d-8410-6f3f6e84d0b0-kube-api-access-fzc48\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.761769 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.773297 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.789208 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:31 crc kubenswrapper[4902]: E0121 16:12:31.789654 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-log" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.789679 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-log" Jan 21 16:12:31 crc kubenswrapper[4902]: E0121 16:12:31.789699 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-api" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.789706 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-api" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.789942 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-api" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.789969 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-log" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.797399 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.802473 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.802688 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.802835 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.806010 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.906424 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8603f024-f71f-486b-93aa-e6397021aa48-logs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.906577 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkspl\" (UniqueName: \"kubernetes.io/projected/8603f024-f71f-486b-93aa-e6397021aa48-kube-api-access-tkspl\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.906609 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.906644 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-config-data\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.906674 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-public-tls-certs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.906786 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.008745 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkspl\" (UniqueName: \"kubernetes.io/projected/8603f024-f71f-486b-93aa-e6397021aa48-kube-api-access-tkspl\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.008791 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-combined-ca-bundle\") 
pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.008810 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-config-data\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.008831 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-public-tls-certs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.008890 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.008956 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8603f024-f71f-486b-93aa-e6397021aa48-logs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.009417 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8603f024-f71f-486b-93aa-e6397021aa48-logs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.012899 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-config-data\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.013380 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.018565 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.025565 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-public-tls-certs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.032626 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkspl\" (UniqueName: \"kubernetes.io/projected/8603f024-f71f-486b-93aa-e6397021aa48-kube-api-access-tkspl\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 
21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.117333 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.306290 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" path="/var/lib/kubelet/pods/07c018ac-4b51-418d-8410-6f3f6e84d0b0/volumes" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.548103 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:33 crc kubenswrapper[4902]: I0121 16:12:33.438444 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8603f024-f71f-486b-93aa-e6397021aa48","Type":"ContainerStarted","Data":"060fa514553911bcf1a8ecf95920d87c8506c854b2448d57b1049b90596e1104"} Jan 21 16:12:33 crc kubenswrapper[4902]: I0121 16:12:33.438760 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8603f024-f71f-486b-93aa-e6397021aa48","Type":"ContainerStarted","Data":"885e2c15d8f17437f5572b584efdda7193154e6764bc22c0f7d36b56f21e1062"} Jan 21 16:12:33 crc kubenswrapper[4902]: I0121 16:12:33.438774 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8603f024-f71f-486b-93aa-e6397021aa48","Type":"ContainerStarted","Data":"d288cd73ed0e00fc8ff66949a8c44c74e42cd9ed73e662d9d606ce587d26ae87"} Jan 21 16:12:33 crc kubenswrapper[4902]: I0121 16:12:33.472833 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.472807867 podStartE2EDuration="2.472807867s" podCreationTimestamp="2026-01-21 16:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:12:33.458079403 +0000 UTC m=+5915.534912432" watchObservedRunningTime="2026-01-21 16:12:33.472807867 +0000 UTC m=+5915.549640916" Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.340190 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.401355 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66f49c7d99-gbqjj"] Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.401659 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" podUID="b01a7675-d9b2-451e-8137-b069f892c1dd" containerName="dnsmasq-dns" containerID="cri-o://c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2" gracePeriod=10 Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.891668 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.989723 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-nb\") pod \"b01a7675-d9b2-451e-8137-b069f892c1dd\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.989887 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-dns-svc\") pod \"b01a7675-d9b2-451e-8137-b069f892c1dd\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.989915 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-sb\") pod \"b01a7675-d9b2-451e-8137-b069f892c1dd\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.989964 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-config\") pod \"b01a7675-d9b2-451e-8137-b069f892c1dd\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.990079 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwjh7\" (UniqueName: \"kubernetes.io/projected/b01a7675-d9b2-451e-8137-b069f892c1dd-kube-api-access-nwjh7\") pod \"b01a7675-d9b2-451e-8137-b069f892c1dd\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.049849 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b01a7675-d9b2-451e-8137-b069f892c1dd-kube-api-access-nwjh7" (OuterVolumeSpecName: "kube-api-access-nwjh7") pod "b01a7675-d9b2-451e-8137-b069f892c1dd" (UID: "b01a7675-d9b2-451e-8137-b069f892c1dd"). InnerVolumeSpecName "kube-api-access-nwjh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.095000 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwjh7\" (UniqueName: \"kubernetes.io/projected/b01a7675-d9b2-451e-8137-b069f892c1dd-kube-api-access-nwjh7\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.131746 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-config" (OuterVolumeSpecName: "config") pod "b01a7675-d9b2-451e-8137-b069f892c1dd" (UID: "b01a7675-d9b2-451e-8137-b069f892c1dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.158502 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b01a7675-d9b2-451e-8137-b069f892c1dd" (UID: "b01a7675-d9b2-451e-8137-b069f892c1dd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.162267 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b01a7675-d9b2-451e-8137-b069f892c1dd" (UID: "b01a7675-d9b2-451e-8137-b069f892c1dd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.189649 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b01a7675-d9b2-451e-8137-b069f892c1dd" (UID: "b01a7675-d9b2-451e-8137-b069f892c1dd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.196806 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.196842 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.196851 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.196861 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.466133 4902 generic.go:334] "Generic (PLEG): container finished" podID="b01a7675-d9b2-451e-8137-b069f892c1dd" containerID="c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2" exitCode=0 Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.466483 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" event={"ID":"b01a7675-d9b2-451e-8137-b069f892c1dd","Type":"ContainerDied","Data":"c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2"} Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.466522 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" event={"ID":"b01a7675-d9b2-451e-8137-b069f892c1dd","Type":"ContainerDied","Data":"3e67c0e6a35688e67c9ae97a666562e7a06fee4e64265deb351ad6f1c7a1f81e"} Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.466551 4902 scope.go:117] "RemoveContainer" containerID="c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.466742 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.492698 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66f49c7d99-gbqjj"] Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.503147 4902 scope.go:117] "RemoveContainer" containerID="18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.507686 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66f49c7d99-gbqjj"] Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.565282 4902 scope.go:117] "RemoveContainer" containerID="c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2" Jan 21 16:12:36 crc kubenswrapper[4902]: E0121 16:12:36.565765 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2\": container with ID starting with c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2 not found: ID does not exist" containerID="c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.565813 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2"} err="failed to get container status \"c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2\": rpc error: code = NotFound desc = could not find container \"c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2\": container with ID starting with c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2 not found: ID does not exist" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.565845 4902 scope.go:117] "RemoveContainer" containerID="18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4" Jan 21 16:12:36 crc kubenswrapper[4902]: E0121 16:12:36.566285 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4\": container with ID starting with 18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4 not found: ID does not exist" containerID="18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.566318 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4"} err="failed to get container status \"18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4\": rpc error: code = NotFound desc = could not find container \"18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4\": container with ID starting with 18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4 not found: ID does not exist" Jan 21 16:12:38 crc kubenswrapper[4902]: I0121 16:12:38.305544 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b01a7675-d9b2-451e-8137-b069f892c1dd" path="/var/lib/kubelet/pods/b01a7675-d9b2-451e-8137-b069f892c1dd/volumes" Jan 21 16:12:42 crc kubenswrapper[4902]: I0121 16:12:42.118541 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:12:42 crc kubenswrapper[4902]: I0121 16:12:42.119138 4902 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:12:43 crc kubenswrapper[4902]: I0121 16:12:43.136253 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8603f024-f71f-486b-93aa-e6397021aa48" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.89:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:12:43 crc kubenswrapper[4902]: I0121 16:12:43.136278 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8603f024-f71f-486b-93aa-e6397021aa48" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.89:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:12:52 crc kubenswrapper[4902]: I0121 16:12:52.125768 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:12:52 crc kubenswrapper[4902]: I0121 16:12:52.126656 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:12:52 crc kubenswrapper[4902]: I0121 16:12:52.128614 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:12:52 crc kubenswrapper[4902]: I0121 16:12:52.134772 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:12:52 crc kubenswrapper[4902]: I0121 16:12:52.596174 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:12:52 crc kubenswrapper[4902]: I0121 16:12:52.603608 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.265364 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9bqlx"] Jan 21 16:13:15 crc kubenswrapper[4902]: E0121 16:13:15.267847 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01a7675-d9b2-451e-8137-b069f892c1dd" containerName="init" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.267967 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01a7675-d9b2-451e-8137-b069f892c1dd" containerName="init" Jan 21 16:13:15 crc kubenswrapper[4902]: E0121 16:13:15.268059 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01a7675-d9b2-451e-8137-b069f892c1dd" containerName="dnsmasq-dns" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.268165 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01a7675-d9b2-451e-8137-b069f892c1dd" containerName="dnsmasq-dns" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.268465 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b01a7675-d9b2-451e-8137-b069f892c1dd" containerName="dnsmasq-dns" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.269373 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.271616 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-n9d6n" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.272196 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.272445 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.276947 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-qfhz4"] Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.278880 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.289701 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9bqlx"] Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.300779 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qfhz4"] Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.412504 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdbhs\" (UniqueName: \"kubernetes.io/projected/cc475055-769c-4199-8486-3bdca7cd05bc-kube-api-access-pdbhs\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.412601 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-run\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.412624 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d120f671-59d9-42ef-a905-2a6203c5896c-scripts\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.412651 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-run-ovn\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.413650 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-etc-ovs\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.413958 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc475055-769c-4199-8486-3bdca7cd05bc-scripts\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " 
pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.413993 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-run\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.414037 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp62m\" (UniqueName: \"kubernetes.io/projected/d120f671-59d9-42ef-a905-2a6203c5896c-kube-api-access-xp62m\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.414067 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-log\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.414093 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc475055-769c-4199-8486-3bdca7cd05bc-ovn-controller-tls-certs\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.414115 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-lib\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.414144 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-log-ovn\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.414180 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc475055-769c-4199-8486-3bdca7cd05bc-combined-ca-bundle\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516003 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-log-ovn\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516195 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc475055-769c-4199-8486-3bdca7cd05bc-combined-ca-bundle\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 
16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516272 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdbhs\" (UniqueName: \"kubernetes.io/projected/cc475055-769c-4199-8486-3bdca7cd05bc-kube-api-access-pdbhs\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516359 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-run\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516390 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d120f671-59d9-42ef-a905-2a6203c5896c-scripts\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516405 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-log-ovn\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516433 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-run-ovn\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516532 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-run-ovn\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516568 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-etc-ovs\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516608 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-run\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516666 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc475055-769c-4199-8486-3bdca7cd05bc-scripts\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516722 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-run\") pod \"ovn-controller-ovs-qfhz4\" (UID: 
\"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516671 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-etc-ovs\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516854 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-run\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516792 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp62m\" (UniqueName: \"kubernetes.io/projected/d120f671-59d9-42ef-a905-2a6203c5896c-kube-api-access-xp62m\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516905 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-log\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516944 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc475055-769c-4199-8486-3bdca7cd05bc-ovn-controller-tls-certs\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516981 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-lib\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.517138 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-log\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.517163 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-lib\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.518597 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d120f671-59d9-42ef-a905-2a6203c5896c-scripts\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.518791 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/cc475055-769c-4199-8486-3bdca7cd05bc-scripts\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.526752 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc475055-769c-4199-8486-3bdca7cd05bc-combined-ca-bundle\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.532366 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc475055-769c-4199-8486-3bdca7cd05bc-ovn-controller-tls-certs\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.537795 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp62m\" (UniqueName: \"kubernetes.io/projected/d120f671-59d9-42ef-a905-2a6203c5896c-kube-api-access-xp62m\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.538421 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdbhs\" (UniqueName: \"kubernetes.io/projected/cc475055-769c-4199-8486-3bdca7cd05bc-kube-api-access-pdbhs\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.592518 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.606583 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.162925 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9bqlx"] Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.597726 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qfhz4"] Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.826428 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qfhz4" event={"ID":"d120f671-59d9-42ef-a905-2a6203c5896c","Type":"ContainerStarted","Data":"b26b283eb7d1e59a25a4f61b3ea06c7f221252f97d5d110a9c4dacf74d46eabd"} Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.827906 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9bqlx" event={"ID":"cc475055-769c-4199-8486-3bdca7cd05bc","Type":"ContainerStarted","Data":"e4ef1f4f28a63c1a02e263e246ae858214053fb560cc3ed694f2fb3240790bd4"} Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.827948 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9bqlx" event={"ID":"cc475055-769c-4199-8486-3bdca7cd05bc","Type":"ContainerStarted","Data":"e9101f8442efbbc5805c3ccdaf9d284fa3247eeb21cf4ae4e16db1b7420ea28b"} Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.828067 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.854649 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-9vx8r"] Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.855771 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9bqlx" podStartSLOduration=1.855745984 podStartE2EDuration="1.855745984s" podCreationTimestamp="2026-01-21 16:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:13:16.853949914 +0000 UTC m=+5958.930782943" watchObservedRunningTime="2026-01-21 16:13:16.855745984 +0000 UTC m=+5958.932579013" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.857232 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.859942 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.875131 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9vx8r"] Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.957576 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f209787-a9f8-41df-8298-79c1381eecbb-config\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.957638 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f209787-a9f8-41df-8298-79c1381eecbb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.957780 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8f209787-a9f8-41df-8298-79c1381eecbb-ovn-rundir\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.958023 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8f209787-a9f8-41df-8298-79c1381eecbb-ovs-rundir\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.958212 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vldw\" (UniqueName: \"kubernetes.io/projected/8f209787-a9f8-41df-8298-79c1381eecbb-kube-api-access-2vldw\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.958401 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f209787-a9f8-41df-8298-79c1381eecbb-combined-ca-bundle\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.060025 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8f209787-a9f8-41df-8298-79c1381eecbb-ovs-rundir\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.060144 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vldw\" (UniqueName: \"kubernetes.io/projected/8f209787-a9f8-41df-8298-79c1381eecbb-kube-api-access-2vldw\") pod \"ovn-controller-metrics-9vx8r\" (UID: 
\"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.060198 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f209787-a9f8-41df-8298-79c1381eecbb-combined-ca-bundle\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.060291 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f209787-a9f8-41df-8298-79c1381eecbb-config\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.060340 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f209787-a9f8-41df-8298-79c1381eecbb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.060366 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8f209787-a9f8-41df-8298-79c1381eecbb-ovn-rundir\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.060972 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8f209787-a9f8-41df-8298-79c1381eecbb-ovn-rundir\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.061582 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8f209787-a9f8-41df-8298-79c1381eecbb-ovs-rundir\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.061698 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f209787-a9f8-41df-8298-79c1381eecbb-config\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.066292 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f209787-a9f8-41df-8298-79c1381eecbb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.076018 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f209787-a9f8-41df-8298-79c1381eecbb-combined-ca-bundle\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc 
kubenswrapper[4902]: I0121 16:13:17.080066 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vldw\" (UniqueName: \"kubernetes.io/projected/8f209787-a9f8-41df-8298-79c1381eecbb-kube-api-access-2vldw\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.173833 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.662756 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-5zjhh"] Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.664234 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.678672 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-5zjhh"] Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.710724 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9vx8r"] Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.774208 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xb6v\" (UniqueName: \"kubernetes.io/projected/507bf37f-b9da-4064-970b-89f9a27589fe-kube-api-access-5xb6v\") pod \"octavia-db-create-5zjhh\" (UID: \"507bf37f-b9da-4064-970b-89f9a27589fe\") " pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.774259 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/507bf37f-b9da-4064-970b-89f9a27589fe-operator-scripts\") pod \"octavia-db-create-5zjhh\" (UID: \"507bf37f-b9da-4064-970b-89f9a27589fe\") " pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.774393 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.774422 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.837262 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9vx8r" event={"ID":"8f209787-a9f8-41df-8298-79c1381eecbb","Type":"ContainerStarted","Data":"c0412fb96fd0854ced90bef3895a5f41a5d7eb8d745c8feccf5b8535cd54d18f"} Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.839399 4902 generic.go:334] "Generic (PLEG): container finished" podID="d120f671-59d9-42ef-a905-2a6203c5896c" containerID="5b7f9af129d8661b92530cfc428a4f65d3db014e3bb5370887e02202ed36e6b3" exitCode=0 Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.839487 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qfhz4" 
event={"ID":"d120f671-59d9-42ef-a905-2a6203c5896c","Type":"ContainerDied","Data":"5b7f9af129d8661b92530cfc428a4f65d3db014e3bb5370887e02202ed36e6b3"} Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.876633 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xb6v\" (UniqueName: \"kubernetes.io/projected/507bf37f-b9da-4064-970b-89f9a27589fe-kube-api-access-5xb6v\") pod \"octavia-db-create-5zjhh\" (UID: \"507bf37f-b9da-4064-970b-89f9a27589fe\") " pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.876688 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/507bf37f-b9da-4064-970b-89f9a27589fe-operator-scripts\") pod \"octavia-db-create-5zjhh\" (UID: \"507bf37f-b9da-4064-970b-89f9a27589fe\") " pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.877652 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/507bf37f-b9da-4064-970b-89f9a27589fe-operator-scripts\") pod \"octavia-db-create-5zjhh\" (UID: \"507bf37f-b9da-4064-970b-89f9a27589fe\") " pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.893356 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xb6v\" (UniqueName: \"kubernetes.io/projected/507bf37f-b9da-4064-970b-89f9a27589fe-kube-api-access-5xb6v\") pod \"octavia-db-create-5zjhh\" (UID: \"507bf37f-b9da-4064-970b-89f9a27589fe\") " pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.984167 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.452147 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-5zjhh"] Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.708208 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-ae8b-account-create-update-q86xl"] Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.712176 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.716578 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.725136 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-ae8b-account-create-update-q86xl"] Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.793884 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-operator-scripts\") pod \"octavia-ae8b-account-create-update-q86xl\" (UID: \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\") " pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.794645 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzsxd\" (UniqueName: \"kubernetes.io/projected/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-kube-api-access-vzsxd\") pod \"octavia-ae8b-account-create-update-q86xl\" (UID: \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\") " pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.852471 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qfhz4" event={"ID":"d120f671-59d9-42ef-a905-2a6203c5896c","Type":"ContainerStarted","Data":"6f0918632c6894d0d7d441cff75d067d472926ab490d362382bcd7612118b755"} Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.852514 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qfhz4" event={"ID":"d120f671-59d9-42ef-a905-2a6203c5896c","Type":"ContainerStarted","Data":"b9b6b6a800006933e8879773b81bc7e69ea4c88fbe18ccf5825157d325f62e6d"} Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.853742 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.853772 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.855079 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9vx8r" event={"ID":"8f209787-a9f8-41df-8298-79c1381eecbb","Type":"ContainerStarted","Data":"5a0698f4ddebf7c74aee9fede19c8ca29fc52747cac94bb9a373d7dbb4e29206"} Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.857191 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-5zjhh" event={"ID":"507bf37f-b9da-4064-970b-89f9a27589fe","Type":"ContainerStarted","Data":"5cf0b5bdbf01f12d44cd41471171a9c5244aec958a6477fc8835553eabc2f3b6"} Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.857223 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-5zjhh" event={"ID":"507bf37f-b9da-4064-970b-89f9a27589fe","Type":"ContainerStarted","Data":"5512edb3ab0e1de94b7daaa1b8379762d2fcf9c9f42594905428c6a97181ed95"} Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.884157 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-qfhz4" podStartSLOduration=3.884138127 podStartE2EDuration="3.884138127s" podCreationTimestamp="2026-01-21 16:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:13:18.874124936 +0000 UTC m=+5960.950957965" watchObservedRunningTime="2026-01-21 16:13:18.884138127 +0000 UTC m=+5960.960971146" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.896252 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-operator-scripts\") pod \"octavia-ae8b-account-create-update-q86xl\" (UID: \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\") " pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.896411 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzsxd\" (UniqueName: \"kubernetes.io/projected/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-kube-api-access-vzsxd\") pod \"octavia-ae8b-account-create-update-q86xl\" (UID: \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\") " pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.897253 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-operator-scripts\") pod \"octavia-ae8b-account-create-update-q86xl\" (UID: \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\") " pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.897898 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-create-5zjhh" podStartSLOduration=1.8978594229999999 podStartE2EDuration="1.897859423s" podCreationTimestamp="2026-01-21 16:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:13:18.890234949 +0000 UTC m=+5960.967067988" watchObservedRunningTime="2026-01-21 16:13:18.897859423 +0000 UTC m=+5960.974692442" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.915544 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzsxd\" (UniqueName: \"kubernetes.io/projected/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-kube-api-access-vzsxd\") pod \"octavia-ae8b-account-create-update-q86xl\" (UID: \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\") " pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.919844 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-9vx8r" podStartSLOduration=2.91981985 podStartE2EDuration="2.91981985s" podCreationTimestamp="2026-01-21 16:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:13:18.911159117 +0000 UTC m=+5960.987992166" watchObservedRunningTime="2026-01-21 16:13:18.91981985 +0000 UTC m=+5960.996652879" Jan 21 16:13:19 crc kubenswrapper[4902]: I0121 16:13:19.048045 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:19 crc kubenswrapper[4902]: I0121 16:13:19.493917 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-ae8b-account-create-update-q86xl"] Jan 21 16:13:19 crc kubenswrapper[4902]: I0121 16:13:19.866499 4902 generic.go:334] "Generic (PLEG): container finished" podID="507bf37f-b9da-4064-970b-89f9a27589fe" containerID="5cf0b5bdbf01f12d44cd41471171a9c5244aec958a6477fc8835553eabc2f3b6" exitCode=0 Jan 21 16:13:19 crc kubenswrapper[4902]: I0121 16:13:19.866574 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-5zjhh" event={"ID":"507bf37f-b9da-4064-970b-89f9a27589fe","Type":"ContainerDied","Data":"5cf0b5bdbf01f12d44cd41471171a9c5244aec958a6477fc8835553eabc2f3b6"} Jan 21 16:13:19 crc kubenswrapper[4902]: I0121 16:13:19.869396 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ae8b-account-create-update-q86xl" event={"ID":"1baaefdd-ea47-4ac0-98d0-d370180b0eb0","Type":"ContainerStarted","Data":"bbd0af7b0e6a302b723bb3848d085087ea3fb23417c8175750a0c41598fe534f"} Jan 21 16:13:19 crc kubenswrapper[4902]: I0121 16:13:19.869456 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ae8b-account-create-update-q86xl" event={"ID":"1baaefdd-ea47-4ac0-98d0-d370180b0eb0","Type":"ContainerStarted","Data":"571726af135d0494e6bf747d3ebd77dbc4c44f1575d7cc9867388c2db0a1ab73"} Jan 21 16:13:19 crc kubenswrapper[4902]: I0121 16:13:19.901563 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-ae8b-account-create-update-q86xl" podStartSLOduration=1.901544653 podStartE2EDuration="1.901544653s" podCreationTimestamp="2026-01-21 16:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:13:19.895433941 +0000 UTC m=+5961.972266970" watchObservedRunningTime="2026-01-21 16:13:19.901544653 +0000 UTC m=+5961.978377682" Jan 21 16:13:20 crc kubenswrapper[4902]: I0121 16:13:20.882680 4902 generic.go:334] "Generic (PLEG): container finished" podID="1baaefdd-ea47-4ac0-98d0-d370180b0eb0" containerID="bbd0af7b0e6a302b723bb3848d085087ea3fb23417c8175750a0c41598fe534f" exitCode=0 Jan 21 16:13:20 crc kubenswrapper[4902]: I0121 16:13:20.882812 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ae8b-account-create-update-q86xl" event={"ID":"1baaefdd-ea47-4ac0-98d0-d370180b0eb0","Type":"ContainerDied","Data":"bbd0af7b0e6a302b723bb3848d085087ea3fb23417c8175750a0c41598fe534f"} Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.247957 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.365114 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/507bf37f-b9da-4064-970b-89f9a27589fe-operator-scripts\") pod \"507bf37f-b9da-4064-970b-89f9a27589fe\" (UID: \"507bf37f-b9da-4064-970b-89f9a27589fe\") " Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.365398 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xb6v\" (UniqueName: \"kubernetes.io/projected/507bf37f-b9da-4064-970b-89f9a27589fe-kube-api-access-5xb6v\") pod \"507bf37f-b9da-4064-970b-89f9a27589fe\" (UID: \"507bf37f-b9da-4064-970b-89f9a27589fe\") " Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.365980 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/507bf37f-b9da-4064-970b-89f9a27589fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "507bf37f-b9da-4064-970b-89f9a27589fe" (UID: "507bf37f-b9da-4064-970b-89f9a27589fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.366639 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/507bf37f-b9da-4064-970b-89f9a27589fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.371575 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507bf37f-b9da-4064-970b-89f9a27589fe-kube-api-access-5xb6v" (OuterVolumeSpecName: "kube-api-access-5xb6v") pod "507bf37f-b9da-4064-970b-89f9a27589fe" (UID: "507bf37f-b9da-4064-970b-89f9a27589fe"). InnerVolumeSpecName "kube-api-access-5xb6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.469215 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xb6v\" (UniqueName: \"kubernetes.io/projected/507bf37f-b9da-4064-970b-89f9a27589fe-kube-api-access-5xb6v\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.894128 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-5zjhh" event={"ID":"507bf37f-b9da-4064-970b-89f9a27589fe","Type":"ContainerDied","Data":"5512edb3ab0e1de94b7daaa1b8379762d2fcf9c9f42594905428c6a97181ed95"} Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.894189 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5512edb3ab0e1de94b7daaa1b8379762d2fcf9c9f42594905428c6a97181ed95" Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.896035 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.262369 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.387295 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-operator-scripts\") pod \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\" (UID: \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\") " Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.387384 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzsxd\" (UniqueName: \"kubernetes.io/projected/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-kube-api-access-vzsxd\") pod \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\" (UID: \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\") " Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.387899 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1baaefdd-ea47-4ac0-98d0-d370180b0eb0" (UID: "1baaefdd-ea47-4ac0-98d0-d370180b0eb0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.393366 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-kube-api-access-vzsxd" (OuterVolumeSpecName: "kube-api-access-vzsxd") pod "1baaefdd-ea47-4ac0-98d0-d370180b0eb0" (UID: "1baaefdd-ea47-4ac0-98d0-d370180b0eb0"). InnerVolumeSpecName "kube-api-access-vzsxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.489467 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.489505 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzsxd\" (UniqueName: \"kubernetes.io/projected/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-kube-api-access-vzsxd\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.904324 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ae8b-account-create-update-q86xl" event={"ID":"1baaefdd-ea47-4ac0-98d0-d370180b0eb0","Type":"ContainerDied","Data":"571726af135d0494e6bf747d3ebd77dbc4c44f1575d7cc9867388c2db0a1ab73"} Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.904368 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="571726af135d0494e6bf747d3ebd77dbc4c44f1575d7cc9867388c2db0a1ab73" Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.904420 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.403483 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-q8nvb"] Jan 21 16:13:24 crc kubenswrapper[4902]: E0121 16:13:24.404120 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507bf37f-b9da-4064-970b-89f9a27589fe" containerName="mariadb-database-create" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.404134 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="507bf37f-b9da-4064-970b-89f9a27589fe" containerName="mariadb-database-create" Jan 21 16:13:24 crc kubenswrapper[4902]: E0121 16:13:24.404144 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1baaefdd-ea47-4ac0-98d0-d370180b0eb0" containerName="mariadb-account-create-update" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.404150 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1baaefdd-ea47-4ac0-98d0-d370180b0eb0" containerName="mariadb-account-create-update" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.404319 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="507bf37f-b9da-4064-970b-89f9a27589fe" containerName="mariadb-database-create" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.404344 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="1baaefdd-ea47-4ac0-98d0-d370180b0eb0" containerName="mariadb-account-create-update" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.404985 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.419573 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-q8nvb"] Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.546502 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4tm9\" (UniqueName: \"kubernetes.io/projected/f4a4e549-a509-40db-8756-e37432024793-kube-api-access-v4tm9\") pod \"octavia-persistence-db-create-q8nvb\" (UID: \"f4a4e549-a509-40db-8756-e37432024793\") " pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.547033 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a4e549-a509-40db-8756-e37432024793-operator-scripts\") pod \"octavia-persistence-db-create-q8nvb\" (UID: \"f4a4e549-a509-40db-8756-e37432024793\") " pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.648595 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a4e549-a509-40db-8756-e37432024793-operator-scripts\") pod \"octavia-persistence-db-create-q8nvb\" (UID: \"f4a4e549-a509-40db-8756-e37432024793\") " pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.648754 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4tm9\" (UniqueName: \"kubernetes.io/projected/f4a4e549-a509-40db-8756-e37432024793-kube-api-access-v4tm9\") pod \"octavia-persistence-db-create-q8nvb\" (UID: \"f4a4e549-a509-40db-8756-e37432024793\") " 
pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.650020 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a4e549-a509-40db-8756-e37432024793-operator-scripts\") pod \"octavia-persistence-db-create-q8nvb\" (UID: \"f4a4e549-a509-40db-8756-e37432024793\") " pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.666279 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4tm9\" (UniqueName: \"kubernetes.io/projected/f4a4e549-a509-40db-8756-e37432024793-kube-api-access-v4tm9\") pod \"octavia-persistence-db-create-q8nvb\" (UID: \"f4a4e549-a509-40db-8756-e37432024793\") " pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.755321 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.235953 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-q8nvb"] Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.683589 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-d1a6-account-create-update-cw969"] Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.685124 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.694331 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-d1a6-account-create-update-cw969"] Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.697079 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.879114 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-operator-scripts\") pod \"octavia-d1a6-account-create-update-cw969\" (UID: \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\") " pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.879334 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6629n\" (UniqueName: \"kubernetes.io/projected/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-kube-api-access-6629n\") pod \"octavia-d1a6-account-create-update-cw969\" (UID: \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\") " pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.937550 4902 generic.go:334] "Generic (PLEG): container finished" podID="f4a4e549-a509-40db-8756-e37432024793" containerID="26b51b45f191ff662cf71fe75dfa0a28808489ff71c63772b28558abe727c5a5" exitCode=0 Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.937599 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-q8nvb" event={"ID":"f4a4e549-a509-40db-8756-e37432024793","Type":"ContainerDied","Data":"26b51b45f191ff662cf71fe75dfa0a28808489ff71c63772b28558abe727c5a5"} Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.937632 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/octavia-persistence-db-create-q8nvb" event={"ID":"f4a4e549-a509-40db-8756-e37432024793","Type":"ContainerStarted","Data":"8a42c0d43476f2283056f1b8cdd8149baed43b95cce73b87e2c9bcb4869745cd"} Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.981711 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6629n\" (UniqueName: \"kubernetes.io/projected/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-kube-api-access-6629n\") pod \"octavia-d1a6-account-create-update-cw969\" (UID: \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\") " pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.981887 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-operator-scripts\") pod \"octavia-d1a6-account-create-update-cw969\" (UID: \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\") " pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.982902 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-operator-scripts\") pod \"octavia-d1a6-account-create-update-cw969\" (UID: \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\") " pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:26 crc kubenswrapper[4902]: I0121 16:13:26.003253 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6629n\" (UniqueName: \"kubernetes.io/projected/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-kube-api-access-6629n\") pod \"octavia-d1a6-account-create-update-cw969\" (UID: \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\") " pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:26 crc kubenswrapper[4902]: I0121 16:13:26.299998 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:26 crc kubenswrapper[4902]: I0121 16:13:26.768728 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-d1a6-account-create-update-cw969"] Jan 21 16:13:26 crc kubenswrapper[4902]: I0121 16:13:26.946656 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-d1a6-account-create-update-cw969" event={"ID":"502e21f3-ea57-4f04-8e23-9b45c7a07ca2","Type":"ContainerStarted","Data":"240574aacfc77d8f19c897fe77d38feb19c1b756f6a4d42a02116477198ec950"} Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.367999 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.514297 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4tm9\" (UniqueName: \"kubernetes.io/projected/f4a4e549-a509-40db-8756-e37432024793-kube-api-access-v4tm9\") pod \"f4a4e549-a509-40db-8756-e37432024793\" (UID: \"f4a4e549-a509-40db-8756-e37432024793\") " Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.514380 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a4e549-a509-40db-8756-e37432024793-operator-scripts\") pod \"f4a4e549-a509-40db-8756-e37432024793\" (UID: \"f4a4e549-a509-40db-8756-e37432024793\") " Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.515182 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a4e549-a509-40db-8756-e37432024793-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4a4e549-a509-40db-8756-e37432024793" (UID: "f4a4e549-a509-40db-8756-e37432024793"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.524824 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a4e549-a509-40db-8756-e37432024793-kube-api-access-v4tm9" (OuterVolumeSpecName: "kube-api-access-v4tm9") pod "f4a4e549-a509-40db-8756-e37432024793" (UID: "f4a4e549-a509-40db-8756-e37432024793"). InnerVolumeSpecName "kube-api-access-v4tm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.618065 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4tm9\" (UniqueName: \"kubernetes.io/projected/f4a4e549-a509-40db-8756-e37432024793-kube-api-access-v4tm9\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.618427 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a4e549-a509-40db-8756-e37432024793-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.956527 4902 generic.go:334] "Generic (PLEG): container finished" podID="502e21f3-ea57-4f04-8e23-9b45c7a07ca2" containerID="9ceea852acb3ca8b99175935197b72276107562be97cda3fb8e5495a3f66a192" exitCode=0 Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.956593 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-d1a6-account-create-update-cw969" event={"ID":"502e21f3-ea57-4f04-8e23-9b45c7a07ca2","Type":"ContainerDied","Data":"9ceea852acb3ca8b99175935197b72276107562be97cda3fb8e5495a3f66a192"} Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.958233 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-q8nvb" event={"ID":"f4a4e549-a509-40db-8756-e37432024793","Type":"ContainerDied","Data":"8a42c0d43476f2283056f1b8cdd8149baed43b95cce73b87e2c9bcb4869745cd"} Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.958256 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a42c0d43476f2283056f1b8cdd8149baed43b95cce73b87e2c9bcb4869745cd" Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.958300 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.315714 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.365840 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-operator-scripts\") pod \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\" (UID: \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\") " Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.366010 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6629n\" (UniqueName: \"kubernetes.io/projected/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-kube-api-access-6629n\") pod \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\" (UID: \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\") " Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.366516 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "502e21f3-ea57-4f04-8e23-9b45c7a07ca2" (UID: "502e21f3-ea57-4f04-8e23-9b45c7a07ca2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.372513 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-kube-api-access-6629n" (OuterVolumeSpecName: "kube-api-access-6629n") pod "502e21f3-ea57-4f04-8e23-9b45c7a07ca2" (UID: "502e21f3-ea57-4f04-8e23-9b45c7a07ca2"). InnerVolumeSpecName "kube-api-access-6629n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.467929 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6629n\" (UniqueName: \"kubernetes.io/projected/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-kube-api-access-6629n\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.467968 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.976673 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-d1a6-account-create-update-cw969" event={"ID":"502e21f3-ea57-4f04-8e23-9b45c7a07ca2","Type":"ContainerDied","Data":"240574aacfc77d8f19c897fe77d38feb19c1b756f6a4d42a02116477198ec950"} Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.976922 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="240574aacfc77d8f19c897fe77d38feb19c1b756f6a4d42a02116477198ec950" Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.976753 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.793465 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-68fdc4858c-f84fc"] Jan 21 16:13:31 crc kubenswrapper[4902]: E0121 16:13:31.794281 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502e21f3-ea57-4f04-8e23-9b45c7a07ca2" containerName="mariadb-account-create-update" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.794299 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="502e21f3-ea57-4f04-8e23-9b45c7a07ca2" containerName="mariadb-account-create-update" Jan 21 16:13:31 crc kubenswrapper[4902]: E0121 16:13:31.794317 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a4e549-a509-40db-8756-e37432024793" containerName="mariadb-database-create" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.794325 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a4e549-a509-40db-8756-e37432024793" containerName="mariadb-database-create" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.794568 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a4e549-a509-40db-8756-e37432024793" containerName="mariadb-database-create" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.794597 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="502e21f3-ea57-4f04-8e23-9b45c7a07ca2" containerName="mariadb-account-create-update" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.796135 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.798576 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.802818 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-68fdc4858c-f84fc"] Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.803423 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-d4crj" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.804606 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.804693 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-ovndbs" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.816943 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-ovndb-tls-certs\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.817031 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-config-data-merged\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.817302 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-scripts\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.817415 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-combined-ca-bundle\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.817455 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-config-data\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.817471 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-octavia-run\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.919227 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-ovndb-tls-certs\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.919299 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-config-data-merged\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.919895 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-config-data-merged\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.919978 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-scripts\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.920564 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-combined-ca-bundle\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.920594 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-config-data\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.920610 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-octavia-run\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.920882 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-octavia-run\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.928813 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-scripts\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.928961 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-ovndb-tls-certs\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.929776 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-combined-ca-bundle\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.939231 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-config-data\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:32 crc kubenswrapper[4902]: I0121 16:13:32.116095 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:32 crc kubenswrapper[4902]: I0121 16:13:32.649579 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-68fdc4858c-f84fc"] Jan 21 16:13:32 crc kubenswrapper[4902]: I0121 16:13:32.663881 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:13:33 crc kubenswrapper[4902]: I0121 16:13:33.007227 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-68fdc4858c-f84fc" event={"ID":"d51aa030-a37e-41cd-8552-491e33fe846f","Type":"ContainerStarted","Data":"74e10b4cb1523503d4eeb2bf1bd717e5c53fb870c6fa0cbb79018c89de00319b"} Jan 21 16:13:44 crc kubenswrapper[4902]: I0121 16:13:44.065402 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-fd43-account-create-update-f6bm7"] Jan 21 16:13:44 crc kubenswrapper[4902]: I0121 16:13:44.076439 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-9fbzk"] Jan 21 16:13:44 crc kubenswrapper[4902]: I0121 16:13:44.097128 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-fd43-account-create-update-f6bm7"] Jan 21 16:13:44 crc kubenswrapper[4902]: I0121 16:13:44.105413 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-9fbzk"] Jan 21 16:13:44 crc kubenswrapper[4902]: I0121 16:13:44.305977 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b9f6374-66c7-4124-b410-c5d60c8f0d6b" path="/var/lib/kubelet/pods/0b9f6374-66c7-4124-b410-c5d60c8f0d6b/volumes" Jan 21 16:13:44 crc kubenswrapper[4902]: I0121 16:13:44.306832 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd3463ca-5f37-4a7e-9f53-c32f2abe3502" path="/var/lib/kubelet/pods/dd3463ca-5f37-4a7e-9f53-c32f2abe3502/volumes" Jan 21 16:13:47 crc kubenswrapper[4902]: I0121 16:13:47.498559 4902 generic.go:334] "Generic (PLEG): container finished" podID="d51aa030-a37e-41cd-8552-491e33fe846f" containerID="fc991874ee6a71ec6a7ac8920dba0cf1f3cecb0c9f70e1aa730945d643c88576" exitCode=0 Jan 21 16:13:47 crc kubenswrapper[4902]: I0121 16:13:47.498713 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-68fdc4858c-f84fc" event={"ID":"d51aa030-a37e-41cd-8552-491e33fe846f","Type":"ContainerDied","Data":"fc991874ee6a71ec6a7ac8920dba0cf1f3cecb0c9f70e1aa730945d643c88576"} Jan 21 16:13:47 crc kubenswrapper[4902]: I0121 16:13:47.770452 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:13:47 crc kubenswrapper[4902]: I0121 16:13:47.770509 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:13:48 crc kubenswrapper[4902]: I0121 16:13:48.509966 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-68fdc4858c-f84fc" event={"ID":"d51aa030-a37e-41cd-8552-491e33fe846f","Type":"ContainerStarted","Data":"85be75f32741c5a6c474bfc60d2bfd7e17468dd268de43bcd1ea2514c5405c0a"} Jan 21 16:13:48 crc 
kubenswrapper[4902]: I0121 16:13:48.510299 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-68fdc4858c-f84fc" event={"ID":"d51aa030-a37e-41cd-8552-491e33fe846f","Type":"ContainerStarted","Data":"5305efd4c47cb14e61d707c3da54bab682028a330018a10560da985768cafc0e"} Jan 21 16:13:48 crc kubenswrapper[4902]: I0121 16:13:48.510427 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:48 crc kubenswrapper[4902]: I0121 16:13:48.532879 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-68fdc4858c-f84fc" podStartSLOduration=3.493472542 podStartE2EDuration="17.532856911s" podCreationTimestamp="2026-01-21 16:13:31 +0000 UTC" firstStartedPulling="2026-01-21 16:13:32.66358222 +0000 UTC m=+5974.740415249" lastFinishedPulling="2026-01-21 16:13:46.702966589 +0000 UTC m=+5988.779799618" observedRunningTime="2026-01-21 16:13:48.525459433 +0000 UTC m=+5990.602292462" watchObservedRunningTime="2026-01-21 16:13:48.532856911 +0000 UTC m=+5990.609689940" Jan 21 16:13:49 crc kubenswrapper[4902]: I0121 16:13:49.520293 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.653601 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.665768 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.672538 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.800624 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9bqlx-config-78jk5"] Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.802173 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.805845 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.812129 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9bqlx-config-78jk5"] Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.905435 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-scripts\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.905776 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-additional-scripts\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.905813 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-log-ovn\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.905903 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.905947 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnwkz\" (UniqueName: \"kubernetes.io/projected/e1616834-8fce-44d6-9551-52dc5e1012e4-kube-api-access-xnwkz\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.905971 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run-ovn\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008243 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008308 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnwkz\" (UniqueName: 
\"kubernetes.io/projected/e1616834-8fce-44d6-9551-52dc5e1012e4-kube-api-access-xnwkz\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008337 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run-ovn\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008411 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-scripts\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008440 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-additional-scripts\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008467 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-log-ovn\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008804 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-log-ovn\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008869 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008881 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run-ovn\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.009984 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-additional-scripts\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.012184 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-scripts\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.035492 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-m6jz2"] Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.036208 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnwkz\" (UniqueName: \"kubernetes.io/projected/e1616834-8fce-44d6-9551-52dc5e1012e4-kube-api-access-xnwkz\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.043459 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-m6jz2"] Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.127519 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.763639 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9bqlx-config-78jk5"] Jan 21 16:13:51 crc kubenswrapper[4902]: W0121 16:13:51.773894 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1616834_8fce_44d6_9551_52dc5e1012e4.slice/crio-b861559f35c5cbc3ba73d83f74230652d5459d73a1da1c35233a69e6609f5e46 WatchSource:0}: Error finding container b861559f35c5cbc3ba73d83f74230652d5459d73a1da1c35233a69e6609f5e46: Status 404 returned error can't find the container with id b861559f35c5cbc3ba73d83f74230652d5459d73a1da1c35233a69e6609f5e46 Jan 21 16:13:52 crc kubenswrapper[4902]: I0121 16:13:52.308020 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="072d9d46-6930-490e-9561-cd7e75f05451" path="/var/lib/kubelet/pods/072d9d46-6930-490e-9561-cd7e75f05451/volumes" Jan 21 16:13:52 crc kubenswrapper[4902]: I0121 16:13:52.553990 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9bqlx-config-78jk5" event={"ID":"e1616834-8fce-44d6-9551-52dc5e1012e4","Type":"ContainerStarted","Data":"e3e9c87d9d90cc49442da28637d2cced6b19e9645d80d76c03c98029e5898f54"} Jan 21 16:13:52 crc kubenswrapper[4902]: I0121 16:13:52.554077 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9bqlx-config-78jk5" event={"ID":"e1616834-8fce-44d6-9551-52dc5e1012e4","Type":"ContainerStarted","Data":"b861559f35c5cbc3ba73d83f74230652d5459d73a1da1c35233a69e6609f5e46"} Jan 21 16:13:52 crc kubenswrapper[4902]: I0121 16:13:52.578551 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9bqlx-config-78jk5" podStartSLOduration=2.578530234 podStartE2EDuration="2.578530234s" podCreationTimestamp="2026-01-21 16:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:13:52.573124172 +0000 UTC m=+5994.649957211" watchObservedRunningTime="2026-01-21 16:13:52.578530234 +0000 UTC m=+5994.655363263" Jan 21 16:13:53 crc kubenswrapper[4902]: I0121 16:13:53.568122 4902 generic.go:334] "Generic (PLEG): container finished" podID="e1616834-8fce-44d6-9551-52dc5e1012e4" containerID="e3e9c87d9d90cc49442da28637d2cced6b19e9645d80d76c03c98029e5898f54" 
Jan 21 16:13:53 crc kubenswrapper[4902]: I0121 16:13:53.568232 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9bqlx-config-78jk5" event={"ID":"e1616834-8fce-44d6-9551-52dc5e1012e4","Type":"ContainerDied","Data":"e3e9c87d9d90cc49442da28637d2cced6b19e9645d80d76c03c98029e5898f54"}
Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.022319 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9bqlx-config-78jk5"
Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.193688 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnwkz\" (UniqueName: \"kubernetes.io/projected/e1616834-8fce-44d6-9551-52dc5e1012e4-kube-api-access-xnwkz\") pod \"e1616834-8fce-44d6-9551-52dc5e1012e4\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") "
Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.193755 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run\") pod \"e1616834-8fce-44d6-9551-52dc5e1012e4\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") "
Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.193825 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-log-ovn\") pod \"e1616834-8fce-44d6-9551-52dc5e1012e4\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") "
Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.193868 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-scripts\") pod \"e1616834-8fce-44d6-9551-52dc5e1012e4\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") "
Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.193908 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-additional-scripts\") pod \"e1616834-8fce-44d6-9551-52dc5e1012e4\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") "
Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.194015 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run-ovn\") pod \"e1616834-8fce-44d6-9551-52dc5e1012e4\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") "
Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.194205 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e1616834-8fce-44d6-9551-52dc5e1012e4" (UID: "e1616834-8fce-44d6-9551-52dc5e1012e4"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.194240 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run" (OuterVolumeSpecName: "var-run") pod "e1616834-8fce-44d6-9551-52dc5e1012e4" (UID: "e1616834-8fce-44d6-9551-52dc5e1012e4"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.194381 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e1616834-8fce-44d6-9551-52dc5e1012e4" (UID: "e1616834-8fce-44d6-9551-52dc5e1012e4"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.194700 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e1616834-8fce-44d6-9551-52dc5e1012e4" (UID: "e1616834-8fce-44d6-9551-52dc5e1012e4"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.194992 4902 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.195017 4902 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.195033 4902 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.195066 4902 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.195406 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-scripts" (OuterVolumeSpecName: "scripts") pod "e1616834-8fce-44d6-9551-52dc5e1012e4" (UID: "e1616834-8fce-44d6-9551-52dc5e1012e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.200640 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1616834-8fce-44d6-9551-52dc5e1012e4-kube-api-access-xnwkz" (OuterVolumeSpecName: "kube-api-access-xnwkz") pod "e1616834-8fce-44d6-9551-52dc5e1012e4" (UID: "e1616834-8fce-44d6-9551-52dc5e1012e4"). InnerVolumeSpecName "kube-api-access-xnwkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.296293 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.296321 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnwkz\" (UniqueName: \"kubernetes.io/projected/e1616834-8fce-44d6-9551-52dc5e1012e4-kube-api-access-xnwkz\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.590885 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9bqlx-config-78jk5" event={"ID":"e1616834-8fce-44d6-9551-52dc5e1012e4","Type":"ContainerDied","Data":"b861559f35c5cbc3ba73d83f74230652d5459d73a1da1c35233a69e6609f5e46"} Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.591593 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b861559f35c5cbc3ba73d83f74230652d5459d73a1da1c35233a69e6609f5e46" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.590912 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.671844 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9bqlx-config-78jk5"] Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.684686 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9bqlx-config-78jk5"] Jan 21 16:13:56 crc kubenswrapper[4902]: I0121 16:13:56.306213 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1616834-8fce-44d6-9551-52dc5e1012e4" path="/var/lib/kubelet/pods/e1616834-8fce-44d6-9551-52dc5e1012e4/volumes" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.666117 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-kn74s"] Jan 21 16:14:02 crc kubenswrapper[4902]: E0121 16:14:02.666835 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1616834-8fce-44d6-9551-52dc5e1012e4" containerName="ovn-config" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.666847 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1616834-8fce-44d6-9551-52dc5e1012e4" containerName="ovn-config" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.667020 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1616834-8fce-44d6-9551-52dc5e1012e4" containerName="ovn-config" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.667921 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.669972 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.670263 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.670610 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.687488 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-kn74s"] Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.856566 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-config-data-merged\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.856750 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-config-data\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.856790 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-hm-ports\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.856935 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-scripts\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.958754 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-config-data\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.958795 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-hm-ports\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.958877 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-scripts\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.958930 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-config-data-merged\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.959389 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-config-data-merged\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.960150 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-hm-ports\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.964681 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-config-data\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.974852 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-scripts\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.986658 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.622296 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-kn74s"] Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.715462 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-zmb5b"] Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.717405 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.725719 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.727182 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-kn74s" event={"ID":"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb","Type":"ContainerStarted","Data":"f20088af3212fe9054842943e360bd5030a5079c52ee84fd40b8af92ab57aacf"} Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.733723 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-zmb5b"] Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.797610 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/e68cbe1e-2ace-4011-856c-5fa393f45b4b-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-zmb5b\" (UID: \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\") " pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.797733 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e68cbe1e-2ace-4011-856c-5fa393f45b4b-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-zmb5b\" (UID: \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\") " pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.899489 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/e68cbe1e-2ace-4011-856c-5fa393f45b4b-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-zmb5b\" (UID: \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\") " pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.899637 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e68cbe1e-2ace-4011-856c-5fa393f45b4b-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-zmb5b\" (UID: \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\") " pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.900073 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/e68cbe1e-2ace-4011-856c-5fa393f45b4b-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-zmb5b\" (UID: \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\") " pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.907001 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e68cbe1e-2ace-4011-856c-5fa393f45b4b-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-zmb5b\" (UID: \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\") " pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.059333 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.642161 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-vrr2k"] Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.644277 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.646575 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.674679 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-vrr2k"] Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.674980 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-scripts\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.675108 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data-merged\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.675214 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.675308 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-combined-ca-bundle\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.715615 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-zmb5b"] Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.754321 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" event={"ID":"e68cbe1e-2ace-4011-856c-5fa393f45b4b","Type":"ContainerStarted","Data":"b6f108856fdfaa3ddd29d66fca750e5174c3b7beab2b9dd46ad0290fb86745d4"} Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.777471 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-scripts\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.777533 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data-merged\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc 
Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.777613 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-combined-ca-bundle\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k"
Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.779346 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data-merged\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k"
Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.783768 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-combined-ca-bundle\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k"
Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.784456 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k"
Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.799970 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-scripts\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k"
Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.982519 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-vrr2k"
Need to start a new one" pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:05 crc kubenswrapper[4902]: I0121 16:14:05.037451 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-clvkp"] Jan 21 16:14:05 crc kubenswrapper[4902]: I0121 16:14:05.046256 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-clvkp"] Jan 21 16:14:05 crc kubenswrapper[4902]: I0121 16:14:05.836380 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-vrr2k"] Jan 21 16:14:05 crc kubenswrapper[4902]: W0121 16:14:05.847617 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60a6ab47_0bbe_428a_82f5_478fc4c52e8a.slice/crio-bc2c17c7cde44a49901c354fc86f24095282904510ad331016a89de7e5e63366 WatchSource:0}: Error finding container bc2c17c7cde44a49901c354fc86f24095282904510ad331016a89de7e5e63366: Status 404 returned error can't find the container with id bc2c17c7cde44a49901c354fc86f24095282904510ad331016a89de7e5e63366 Jan 21 16:14:06 crc kubenswrapper[4902]: I0121 16:14:06.308100 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="663c22ab-26c3-4d29-8965-255dc095eef2" path="/var/lib/kubelet/pods/663c22ab-26c3-4d29-8965-255dc095eef2/volumes" Jan 21 16:14:06 crc kubenswrapper[4902]: I0121 16:14:06.793350 4902 generic.go:334] "Generic (PLEG): container finished" podID="60a6ab47-0bbe-428a-82f5-478fc4c52e8a" containerID="abc9a540052a00b1952e4ccbff28d0fd5e66b03f552886a2028474527bd5343e" exitCode=0 Jan 21 16:14:06 crc kubenswrapper[4902]: I0121 16:14:06.793442 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-vrr2k" event={"ID":"60a6ab47-0bbe-428a-82f5-478fc4c52e8a","Type":"ContainerDied","Data":"abc9a540052a00b1952e4ccbff28d0fd5e66b03f552886a2028474527bd5343e"} Jan 21 16:14:06 crc kubenswrapper[4902]: I0121 16:14:06.793496 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-vrr2k" event={"ID":"60a6ab47-0bbe-428a-82f5-478fc4c52e8a","Type":"ContainerStarted","Data":"bc2c17c7cde44a49901c354fc86f24095282904510ad331016a89de7e5e63366"} Jan 21 16:14:06 crc kubenswrapper[4902]: I0121 16:14:06.799467 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-kn74s" event={"ID":"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb","Type":"ContainerStarted","Data":"125ce162fb1d06f6ce0ad368bac748d97ad4258b813971480c2bc437990b851b"} Jan 21 16:14:07 crc kubenswrapper[4902]: I0121 16:14:07.326782 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:14:07 crc kubenswrapper[4902]: I0121 16:14:07.391744 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:14:07 crc kubenswrapper[4902]: I0121 16:14:07.810298 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-vrr2k" event={"ID":"60a6ab47-0bbe-428a-82f5-478fc4c52e8a","Type":"ContainerStarted","Data":"f4cdf18149c84ac20ab00cae2362d90191fa45e99a1761f8508af240e2f326b6"} Jan 21 16:14:07 crc kubenswrapper[4902]: I0121 16:14:07.828998 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-vrr2k" podStartSLOduration=3.828981834 podStartE2EDuration="3.828981834s" podCreationTimestamp="2026-01-21 16:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:14:07.827915654 +0000 UTC m=+6009.904748693" watchObservedRunningTime="2026-01-21 16:14:07.828981834 +0000 UTC m=+6009.905814863" Jan 21 16:14:08 crc kubenswrapper[4902]: I0121 16:14:08.822564 4902 generic.go:334] "Generic (PLEG): container finished" podID="802fca2f-9dae-4f46-aaf3-c688c8ebbdfb" containerID="125ce162fb1d06f6ce0ad368bac748d97ad4258b813971480c2bc437990b851b" exitCode=0 Jan 21 16:14:08 crc kubenswrapper[4902]: I0121 16:14:08.822665 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-kn74s" event={"ID":"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb","Type":"ContainerDied","Data":"125ce162fb1d06f6ce0ad368bac748d97ad4258b813971480c2bc437990b851b"} Jan 21 16:14:10 crc kubenswrapper[4902]: I0121 16:14:10.839526 4902 generic.go:334] "Generic (PLEG): container finished" podID="60a6ab47-0bbe-428a-82f5-478fc4c52e8a" containerID="f4cdf18149c84ac20ab00cae2362d90191fa45e99a1761f8508af240e2f326b6" exitCode=0 Jan 21 16:14:10 crc kubenswrapper[4902]: I0121 16:14:10.840911 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-vrr2k" event={"ID":"60a6ab47-0bbe-428a-82f5-478fc4c52e8a","Type":"ContainerDied","Data":"f4cdf18149c84ac20ab00cae2362d90191fa45e99a1761f8508af240e2f326b6"} Jan 21 16:14:10 crc kubenswrapper[4902]: I0121 16:14:10.843697 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-kn74s" event={"ID":"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb","Type":"ContainerStarted","Data":"8b9722f803eddd28e0f8df0cf26577d7cd559dd8516a1f4276562320f20d3d16"} Jan 21 16:14:10 crc kubenswrapper[4902]: I0121 16:14:10.844742 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:10 crc kubenswrapper[4902]: I0121 16:14:10.882238 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-kn74s" podStartSLOduration=2.6046143649999998 podStartE2EDuration="8.882221652s" podCreationTimestamp="2026-01-21 16:14:02 +0000 UTC" firstStartedPulling="2026-01-21 16:14:03.647101182 +0000 UTC m=+6005.723934211" lastFinishedPulling="2026-01-21 16:14:09.924708469 +0000 UTC m=+6012.001541498" observedRunningTime="2026-01-21 16:14:10.877954282 +0000 UTC m=+6012.954787321" watchObservedRunningTime="2026-01-21 16:14:10.882221652 +0000 UTC m=+6012.959054681" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.224813 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.329815 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data-merged\") pod \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.329973 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-scripts\") pod \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.330073 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data\") pod \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.330244 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-combined-ca-bundle\") pod \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.336805 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data" (OuterVolumeSpecName: "config-data") pod "60a6ab47-0bbe-428a-82f5-478fc4c52e8a" (UID: "60a6ab47-0bbe-428a-82f5-478fc4c52e8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.337184 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-scripts" (OuterVolumeSpecName: "scripts") pod "60a6ab47-0bbe-428a-82f5-478fc4c52e8a" (UID: "60a6ab47-0bbe-428a-82f5-478fc4c52e8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.358639 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "60a6ab47-0bbe-428a-82f5-478fc4c52e8a" (UID: "60a6ab47-0bbe-428a-82f5-478fc4c52e8a"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.358939 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60a6ab47-0bbe-428a-82f5-478fc4c52e8a" (UID: "60a6ab47-0bbe-428a-82f5-478fc4c52e8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.432639 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.432675 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.432685 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.432694 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.870399 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-vrr2k" event={"ID":"60a6ab47-0bbe-428a-82f5-478fc4c52e8a","Type":"ContainerDied","Data":"bc2c17c7cde44a49901c354fc86f24095282904510ad331016a89de7e5e63366"} Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.870498 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc2c17c7cde44a49901c354fc86f24095282904510ad331016a89de7e5e63366" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.870501 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.503256 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-77584c4dc-lmbjv"] Jan 21 16:14:13 crc kubenswrapper[4902]: E0121 16:14:13.503653 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a6ab47-0bbe-428a-82f5-478fc4c52e8a" containerName="init" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.503665 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a6ab47-0bbe-428a-82f5-478fc4c52e8a" containerName="init" Jan 21 16:14:13 crc kubenswrapper[4902]: E0121 16:14:13.503684 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a6ab47-0bbe-428a-82f5-478fc4c52e8a" containerName="octavia-db-sync" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.503689 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a6ab47-0bbe-428a-82f5-478fc4c52e8a" containerName="octavia-db-sync" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.503875 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="60a6ab47-0bbe-428a-82f5-478fc4c52e8a" containerName="octavia-db-sync" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.505257 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.511670 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-public-svc" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.511791 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-internal-svc" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.527200 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-77584c4dc-lmbjv"] Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.556574 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/441cf475-eec9-4cee-84ab-7807e9ab0b75-config-data-merged\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.556865 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-ovndb-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.557176 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/441cf475-eec9-4cee-84ab-7807e9ab0b75-octavia-run\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.557235 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-config-data\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.557332 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-public-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.557440 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-internal-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.557461 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-scripts\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.557625 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-combined-ca-bundle\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.659305 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-internal-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.659349 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-scripts\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.659417 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-combined-ca-bundle\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.659466 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/441cf475-eec9-4cee-84ab-7807e9ab0b75-config-data-merged\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.659491 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-ovndb-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.659545 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/441cf475-eec9-4cee-84ab-7807e9ab0b75-octavia-run\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.659564 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-config-data\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.659590 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-public-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.660433 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/441cf475-eec9-4cee-84ab-7807e9ab0b75-config-data-merged\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.660753 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/441cf475-eec9-4cee-84ab-7807e9ab0b75-octavia-run\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.664537 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-public-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.664865 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-config-data\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.664957 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-combined-ca-bundle\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.665226 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-internal-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.666373 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-ovndb-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.677023 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-scripts\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.821720 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:14 crc kubenswrapper[4902]: I0121 16:14:14.275430 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-77584c4dc-lmbjv"] Jan 21 16:14:14 crc kubenswrapper[4902]: I0121 16:14:14.889335 4902 generic.go:334] "Generic (PLEG): container finished" podID="441cf475-eec9-4cee-84ab-7807e9ab0b75" containerID="ea2fe82cde03c6a78f9d553b103a1c371e701ec1b79c475d35f8a86034f94021" exitCode=0 Jan 21 16:14:14 crc kubenswrapper[4902]: I0121 16:14:14.889387 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-77584c4dc-lmbjv" event={"ID":"441cf475-eec9-4cee-84ab-7807e9ab0b75","Type":"ContainerDied","Data":"ea2fe82cde03c6a78f9d553b103a1c371e701ec1b79c475d35f8a86034f94021"} Jan 21 16:14:14 crc kubenswrapper[4902]: I0121 16:14:14.889648 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-77584c4dc-lmbjv" event={"ID":"441cf475-eec9-4cee-84ab-7807e9ab0b75","Type":"ContainerStarted","Data":"a793d9ac345fa82d74e2acc0874f3a61d56ae2812774c4b1aaeb14624d98850c"} Jan 21 16:14:15 crc kubenswrapper[4902]: E0121 16:14:15.129931 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/2d/2d21773ac1b6ba8fb68f6ae19d75d7f308e3df9e2075fcfc8572117006de3334?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20260121%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20260121T161405Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=1a3ed4259338db31ffcff2399d8d4d8b6acc448f04f0502e93c639ecb435e60f®ion=us-east-1&namespace=gthiemonge&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=octavia-amphora-image&akamai_signature=exp=1769012945~hmac=9ea07ec601ef21c82c3e7bac9e3d6ff616e5094816368e2302f9299e27c52ce9\": net/http: TLS handshake timeout" image="quay.io/gthiemonge/octavia-amphora-image:latest" Jan 21 16:14:15 crc kubenswrapper[4902]: E0121 16:14:15.130359 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/gthiemonge/octavia-amphora-image,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEST_DIR,Value:/usr/local/apache2/htdocs,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:amphora-image,ReadOnly:false,MountPath:/usr/local/apache2/htdocs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-image-upload-7b97d6bc64-zmb5b_openstack(e68cbe1e-2ace-4011-856c-5fa393f45b4b): ErrImagePull: parsing image configuration: Get 
\"https://cdn01.quay.io/quayio-production-s3/sha256/2d/2d21773ac1b6ba8fb68f6ae19d75d7f308e3df9e2075fcfc8572117006de3334?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20260121%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20260121T161405Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=1a3ed4259338db31ffcff2399d8d4d8b6acc448f04f0502e93c639ecb435e60f®ion=us-east-1&namespace=gthiemonge&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=octavia-amphora-image&akamai_signature=exp=1769012945~hmac=9ea07ec601ef21c82c3e7bac9e3d6ff616e5094816368e2302f9299e27c52ce9\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 16:14:15 crc kubenswrapper[4902]: E0121 16:14:15.131548 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"parsing image configuration: Get \\\"https://cdn01.quay.io/quayio-production-s3/sha256/2d/2d21773ac1b6ba8fb68f6ae19d75d7f308e3df9e2075fcfc8572117006de3334?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20260121%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20260121T161405Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=1a3ed4259338db31ffcff2399d8d4d8b6acc448f04f0502e93c639ecb435e60f®ion=us-east-1&namespace=gthiemonge&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=octavia-amphora-image&akamai_signature=exp=1769012945~hmac=9ea07ec601ef21c82c3e7bac9e3d6ff616e5094816368e2302f9299e27c52ce9\\\": net/http: TLS handshake timeout\"" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" Jan 21 16:14:15 crc kubenswrapper[4902]: I0121 16:14:15.900715 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-77584c4dc-lmbjv" event={"ID":"441cf475-eec9-4cee-84ab-7807e9ab0b75","Type":"ContainerStarted","Data":"e93cb80329800d15e2b5ce77b2dc61ef058262b83f15dc3c21436462abf4dfe3"} Jan 21 16:14:15 crc kubenswrapper[4902]: I0121 16:14:15.900780 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-77584c4dc-lmbjv" event={"ID":"441cf475-eec9-4cee-84ab-7807e9ab0b75","Type":"ContainerStarted","Data":"56e119d97ab6875ebdbdb5528d15ca45102e19220044055c1bb7e639486d7373"} Jan 21 16:14:15 crc kubenswrapper[4902]: I0121 16:14:15.900805 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:15 crc kubenswrapper[4902]: I0121 16:14:15.900825 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:15 crc kubenswrapper[4902]: E0121 16:14:15.902698 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/gthiemonge/octavia-amphora-image\\\"\"" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" Jan 21 16:14:15 crc kubenswrapper[4902]: I0121 16:14:15.933254 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-77584c4dc-lmbjv" podStartSLOduration=2.933223673 podStartE2EDuration="2.933223673s" podCreationTimestamp="2026-01-21 16:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:14:15.926674869 +0000 UTC m=+6018.003507908" watchObservedRunningTime="2026-01-21 
Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.769580 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.769855 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.769912 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb"
Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.771467 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.771700 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" gracePeriod=600
Jan 21 16:14:17 crc kubenswrapper[4902]: E0121 16:14:17.911961 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.920951 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" exitCode=0
Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.920992 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac"}
Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.921022 4902 scope.go:117] "RemoveContainer" containerID="db5e286ed12d5cdac8541e22aa5c6794629a15f27a4e802d85c369fc2b4f4f6b"
Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.921598 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac"
Jan 21 16:14:17 crc kubenswrapper[4902]: E0121 16:14:17.921825 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:14:18 crc kubenswrapper[4902]: I0121 16:14:18.032098 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-kn74s"
Jan 21 16:14:25 crc kubenswrapper[4902]: I0121 16:14:25.311518 4902 scope.go:117] "RemoveContainer" containerID="c898501a393ec12d8bdad3ffbecedd820d45983cad8f57e77c1b8bf1f2602ced"
Jan 21 16:14:25 crc kubenswrapper[4902]: I0121 16:14:25.860086 4902 scope.go:117] "RemoveContainer" containerID="94e5637468147f71d442912ca57ee6a969ce1c74828b8408d61b57b6d26eda33"
Jan 21 16:14:25 crc kubenswrapper[4902]: I0121 16:14:25.883526 4902 scope.go:117] "RemoveContainer" containerID="994f6f05fed4b0e62e48fa8578c2ecb21f387018408d5954555b07ebf19b3b49"
Jan 21 16:14:25 crc kubenswrapper[4902]: I0121 16:14:25.932623 4902 scope.go:117] "RemoveContainer" containerID="8e7b81ffed093606aaee9fbef35f94103abd1548cced4aa289004fb371568398"
Jan 21 16:14:29 crc kubenswrapper[4902]: I0121 16:14:29.294976 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac"
Jan 21 16:14:29 crc kubenswrapper[4902]: E0121 16:14:29.297033 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:14:32 crc kubenswrapper[4902]: I0121 16:14:32.938842 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-77584c4dc-lmbjv"
Jan 21 16:14:33 crc kubenswrapper[4902]: I0121 16:14:33.072331 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-77584c4dc-lmbjv"
Jan 21 16:14:33 crc kubenswrapper[4902]: I0121 16:14:33.152066 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-68fdc4858c-f84fc"]
Jan 21 16:14:33 crc kubenswrapper[4902]: I0121 16:14:33.152366 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-68fdc4858c-f84fc" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="octavia-api" containerID="cri-o://5305efd4c47cb14e61d707c3da54bab682028a330018a10560da985768cafc0e" gracePeriod=30
Jan 21 16:14:33 crc kubenswrapper[4902]: I0121 16:14:33.152865 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-68fdc4858c-f84fc" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="octavia-api-provider-agent" containerID="cri-o://85be75f32741c5a6c474bfc60d2bfd7e17468dd268de43bcd1ea2514c5405c0a" gracePeriod=30
Jan 21 16:14:34 crc kubenswrapper[4902]: I0121 16:14:34.133756 4902 generic.go:334] "Generic (PLEG): container finished" podID="d51aa030-a37e-41cd-8552-491e33fe846f" containerID="85be75f32741c5a6c474bfc60d2bfd7e17468dd268de43bcd1ea2514c5405c0a" exitCode=0
Jan 21 16:14:34 crc kubenswrapper[4902]: I0121 16:14:34.133800 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-68fdc4858c-f84fc" event={"ID":"d51aa030-a37e-41cd-8552-491e33fe846f","Type":"ContainerDied","Data":"85be75f32741c5a6c474bfc60d2bfd7e17468dd268de43bcd1ea2514c5405c0a"}
event={"ID":"d51aa030-a37e-41cd-8552-491e33fe846f","Type":"ContainerDied","Data":"85be75f32741c5a6c474bfc60d2bfd7e17468dd268de43bcd1ea2514c5405c0a"} Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.167295 4902 generic.go:334] "Generic (PLEG): container finished" podID="d51aa030-a37e-41cd-8552-491e33fe846f" containerID="5305efd4c47cb14e61d707c3da54bab682028a330018a10560da985768cafc0e" exitCode=0 Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.167369 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-68fdc4858c-f84fc" event={"ID":"d51aa030-a37e-41cd-8552-491e33fe846f","Type":"ContainerDied","Data":"5305efd4c47cb14e61d707c3da54bab682028a330018a10560da985768cafc0e"} Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.445246 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.562547 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-config-data-merged\") pod \"d51aa030-a37e-41cd-8552-491e33fe846f\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.562613 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-octavia-run\") pod \"d51aa030-a37e-41cd-8552-491e33fe846f\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.562728 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-combined-ca-bundle\") pod \"d51aa030-a37e-41cd-8552-491e33fe846f\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.562781 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-scripts\") pod \"d51aa030-a37e-41cd-8552-491e33fe846f\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.562801 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-ovndb-tls-certs\") pod \"d51aa030-a37e-41cd-8552-491e33fe846f\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.562827 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-config-data\") pod \"d51aa030-a37e-41cd-8552-491e33fe846f\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.563329 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-octavia-run" (OuterVolumeSpecName: "octavia-run") pod "d51aa030-a37e-41cd-8552-491e33fe846f" (UID: "d51aa030-a37e-41cd-8552-491e33fe846f"). InnerVolumeSpecName "octavia-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.567849 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-config-data" (OuterVolumeSpecName: "config-data") pod "d51aa030-a37e-41cd-8552-491e33fe846f" (UID: "d51aa030-a37e-41cd-8552-491e33fe846f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.576827 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-scripts" (OuterVolumeSpecName: "scripts") pod "d51aa030-a37e-41cd-8552-491e33fe846f" (UID: "d51aa030-a37e-41cd-8552-491e33fe846f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.618980 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d51aa030-a37e-41cd-8552-491e33fe846f" (UID: "d51aa030-a37e-41cd-8552-491e33fe846f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.625331 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "d51aa030-a37e-41cd-8552-491e33fe846f" (UID: "d51aa030-a37e-41cd-8552-491e33fe846f"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.665999 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.666090 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.666103 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.666115 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.666126 4902 reconciler_common.go:293] "Volume detached for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-octavia-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.710330 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d51aa030-a37e-41cd-8552-491e33fe846f" (UID: "d51aa030-a37e-41cd-8552-491e33fe846f"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.770250 4902 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:38 crc kubenswrapper[4902]: I0121 16:14:38.182297 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-68fdc4858c-f84fc" event={"ID":"d51aa030-a37e-41cd-8552-491e33fe846f","Type":"ContainerDied","Data":"74e10b4cb1523503d4eeb2bf1bd717e5c53fb870c6fa0cbb79018c89de00319b"} Jan 21 16:14:38 crc kubenswrapper[4902]: I0121 16:14:38.182363 4902 scope.go:117] "RemoveContainer" containerID="85be75f32741c5a6c474bfc60d2bfd7e17468dd268de43bcd1ea2514c5405c0a" Jan 21 16:14:38 crc kubenswrapper[4902]: I0121 16:14:38.182411 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:14:38 crc kubenswrapper[4902]: I0121 16:14:38.227242 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-68fdc4858c-f84fc"] Jan 21 16:14:38 crc kubenswrapper[4902]: I0121 16:14:38.236605 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-api-68fdc4858c-f84fc"] Jan 21 16:14:38 crc kubenswrapper[4902]: I0121 16:14:38.310871 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" path="/var/lib/kubelet/pods/d51aa030-a37e-41cd-8552-491e33fe846f/volumes" Jan 21 16:14:40 crc kubenswrapper[4902]: I0121 16:14:40.295570 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:14:40 crc kubenswrapper[4902]: E0121 16:14:40.296094 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:14:44 crc kubenswrapper[4902]: I0121 16:14:44.955526 4902 scope.go:117] "RemoveContainer" containerID="5305efd4c47cb14e61d707c3da54bab682028a330018a10560da985768cafc0e" Jan 21 16:14:45 crc kubenswrapper[4902]: I0121 16:14:45.225749 4902 scope.go:117] "RemoveContainer" containerID="fc991874ee6a71ec6a7ac8920dba0cf1f3cecb0c9f70e1aa730945d643c88576" Jan 21 16:14:46 crc kubenswrapper[4902]: I0121 16:14:46.266210 4902 generic.go:334] "Generic (PLEG): container finished" podID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" containerID="69bb7c1dbb251c3de8ae57f07e730117aca9b6c44df49c90c6093a8ecc70f9e1" exitCode=0 Jan 21 16:14:46 crc kubenswrapper[4902]: I0121 16:14:46.266317 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" event={"ID":"e68cbe1e-2ace-4011-856c-5fa393f45b4b","Type":"ContainerDied","Data":"69bb7c1dbb251c3de8ae57f07e730117aca9b6c44df49c90c6093a8ecc70f9e1"} Jan 21 16:14:47 crc kubenswrapper[4902]: I0121 16:14:47.282195 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" event={"ID":"e68cbe1e-2ace-4011-856c-5fa393f45b4b","Type":"ContainerStarted","Data":"657dc03492b59a1f33c2bcfda57969b4fd0b8668563d2f702e94ee1a2f5b2099"} Jan 21 16:14:47 crc kubenswrapper[4902]: 
Jan 21 16:14:54 crc kubenswrapper[4902]: I0121 16:14:54.296018 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac"
Jan 21 16:14:54 crc kubenswrapper[4902]: E0121 16:14:54.296812 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.140622 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"]
Jan 21 16:15:00 crc kubenswrapper[4902]: E0121 16:15:00.141422 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="octavia-api"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.141435 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="octavia-api"
Jan 21 16:15:00 crc kubenswrapper[4902]: E0121 16:15:00.141464 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="init"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.141470 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="init"
Jan 21 16:15:00 crc kubenswrapper[4902]: E0121 16:15:00.141487 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="octavia-api-provider-agent"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.141494 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="octavia-api-provider-agent"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.141669 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="octavia-api-provider-agent"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.141696 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="octavia-api"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.143000 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.145793 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.147481 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.154184 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"]
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.307538 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72h56\" (UniqueName: \"kubernetes.io/projected/c3234509-8b7b-4b77-9a80-f496d21a727e-kube-api-access-72h56\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.307734 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3234509-8b7b-4b77-9a80-f496d21a727e-config-volume\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.307788 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3234509-8b7b-4b77-9a80-f496d21a727e-secret-volume\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.415149 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72h56\" (UniqueName: \"kubernetes.io/projected/c3234509-8b7b-4b77-9a80-f496d21a727e-kube-api-access-72h56\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.415342 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3234509-8b7b-4b77-9a80-f496d21a727e-config-volume\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.415373 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3234509-8b7b-4b77-9a80-f496d21a727e-secret-volume\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.626778 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3234509-8b7b-4b77-9a80-f496d21a727e-config-volume\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.638132 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3234509-8b7b-4b77-9a80-f496d21a727e-secret-volume\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.638485 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72h56\" (UniqueName: \"kubernetes.io/projected/c3234509-8b7b-4b77-9a80-f496d21a727e-kube-api-access-72h56\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"
Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.927288 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"
Jan 21 16:15:01 crc kubenswrapper[4902]: I0121 16:15:01.456139 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"]
Jan 21 16:15:01 crc kubenswrapper[4902]: W0121 16:15:01.463423 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3234509_8b7b_4b77_9a80_f496d21a727e.slice/crio-e9294568f3d6a90a9c619766f94d50453d679a9353790389fd2fc985e2041129 WatchSource:0}: Error finding container e9294568f3d6a90a9c619766f94d50453d679a9353790389fd2fc985e2041129: Status 404 returned error can't find the container with id e9294568f3d6a90a9c619766f94d50453d679a9353790389fd2fc985e2041129
Jan 21 16:15:02 crc kubenswrapper[4902]: I0121 16:15:02.503608 4902 generic.go:334] "Generic (PLEG): container finished" podID="c3234509-8b7b-4b77-9a80-f496d21a727e" containerID="4e8300ed14fa669d6234d502917b52e699b6641dda6ef60268cdbc2afafd8313" exitCode=0
Jan 21 16:15:02 crc kubenswrapper[4902]: I0121 16:15:02.503681 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" event={"ID":"c3234509-8b7b-4b77-9a80-f496d21a727e","Type":"ContainerDied","Data":"4e8300ed14fa669d6234d502917b52e699b6641dda6ef60268cdbc2afafd8313"}
Jan 21 16:15:02 crc kubenswrapper[4902]: I0121 16:15:02.504198 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" event={"ID":"c3234509-8b7b-4b77-9a80-f496d21a727e","Type":"ContainerStarted","Data":"e9294568f3d6a90a9c619766f94d50453d679a9353790389fd2fc985e2041129"}
Jan 21 16:15:03 crc kubenswrapper[4902]: I0121 16:15:03.912436 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" Jan 21 16:15:03 crc kubenswrapper[4902]: I0121 16:15:03.913333 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3234509-8b7b-4b77-9a80-f496d21a727e-config-volume\") pod \"c3234509-8b7b-4b77-9a80-f496d21a727e\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " Jan 21 16:15:03 crc kubenswrapper[4902]: I0121 16:15:03.913409 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72h56\" (UniqueName: \"kubernetes.io/projected/c3234509-8b7b-4b77-9a80-f496d21a727e-kube-api-access-72h56\") pod \"c3234509-8b7b-4b77-9a80-f496d21a727e\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " Jan 21 16:15:03 crc kubenswrapper[4902]: I0121 16:15:03.913448 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3234509-8b7b-4b77-9a80-f496d21a727e-secret-volume\") pod \"c3234509-8b7b-4b77-9a80-f496d21a727e\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " Jan 21 16:15:03 crc kubenswrapper[4902]: I0121 16:15:03.913899 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3234509-8b7b-4b77-9a80-f496d21a727e-config-volume" (OuterVolumeSpecName: "config-volume") pod "c3234509-8b7b-4b77-9a80-f496d21a727e" (UID: "c3234509-8b7b-4b77-9a80-f496d21a727e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:15:03 crc kubenswrapper[4902]: I0121 16:15:03.918855 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3234509-8b7b-4b77-9a80-f496d21a727e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c3234509-8b7b-4b77-9a80-f496d21a727e" (UID: "c3234509-8b7b-4b77-9a80-f496d21a727e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:15:03 crc kubenswrapper[4902]: I0121 16:15:03.920407 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3234509-8b7b-4b77-9a80-f496d21a727e-kube-api-access-72h56" (OuterVolumeSpecName: "kube-api-access-72h56") pod "c3234509-8b7b-4b77-9a80-f496d21a727e" (UID: "c3234509-8b7b-4b77-9a80-f496d21a727e"). InnerVolumeSpecName "kube-api-access-72h56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:15:04 crc kubenswrapper[4902]: I0121 16:15:04.019133 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3234509-8b7b-4b77-9a80-f496d21a727e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:04 crc kubenswrapper[4902]: I0121 16:15:04.019177 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72h56\" (UniqueName: \"kubernetes.io/projected/c3234509-8b7b-4b77-9a80-f496d21a727e-kube-api-access-72h56\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:04 crc kubenswrapper[4902]: I0121 16:15:04.019190 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3234509-8b7b-4b77-9a80-f496d21a727e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:04 crc kubenswrapper[4902]: I0121 16:15:04.523815 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" event={"ID":"c3234509-8b7b-4b77-9a80-f496d21a727e","Type":"ContainerDied","Data":"e9294568f3d6a90a9c619766f94d50453d679a9353790389fd2fc985e2041129"} Jan 21 16:15:04 crc kubenswrapper[4902]: I0121 16:15:04.524449 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9294568f3d6a90a9c619766f94d50453d679a9353790389fd2fc985e2041129" Jan 21 16:15:04 crc kubenswrapper[4902]: I0121 16:15:04.524059 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" Jan 21 16:15:05 crc kubenswrapper[4902]: I0121 16:15:05.003240 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg"] Jan 21 16:15:05 crc kubenswrapper[4902]: I0121 16:15:05.010969 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg"] Jan 21 16:15:05 crc kubenswrapper[4902]: I0121 16:15:05.295386 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:15:05 crc kubenswrapper[4902]: E0121 16:15:05.295643 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:15:06 crc kubenswrapper[4902]: I0121 16:15:06.305889 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e93c6a82-9651-4ed2-a941-9414d9aff62c" path="/var/lib/kubelet/pods/e93c6a82-9651-4ed2-a941-9414d9aff62c/volumes" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.005991 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-vtnkx"] Jan 21 16:15:07 crc kubenswrapper[4902]: E0121 16:15:07.006562 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3234509-8b7b-4b77-9a80-f496d21a727e" containerName="collect-profiles" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.006585 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3234509-8b7b-4b77-9a80-f496d21a727e" containerName="collect-profiles" Jan 21 16:15:07 crc 
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.008307 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.011207 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.012010 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.016176 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.021991 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-vtnkx"]
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.151636 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-amphora-certs\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.151738 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-scripts\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.151761 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-config-data\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.151786 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-hm-ports\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.151913 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-config-data-merged\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.151946 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-combined-ca-bundle\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.253794 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-config-data-merged\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.253872 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-combined-ca-bundle\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.254021 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-amphora-certs\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.254148 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-scripts\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.254186 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-config-data\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.254219 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-hm-ports\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.255174 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-config-data-merged\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.256063 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-hm-ports\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.260699 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-combined-ca-bundle\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.260840 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-amphora-certs\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.261313 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-scripts\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.262699 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-config-data\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.336253 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-vtnkx"
Jan 21 16:15:08 crc kubenswrapper[4902]: I0121 16:15:08.062751 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-vtnkx"]
Jan 21 16:15:08 crc kubenswrapper[4902]: I0121 16:15:08.603444 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-vtnkx" event={"ID":"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39","Type":"ContainerStarted","Data":"c224aba44857e5091c950c0f81b22a1aa27e8329445ae3f4c7ba128689b62bc9"}
Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.194413 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-pr9tl"]
Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.196555 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-pr9tl"
Need to start a new one" pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.198498 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.200160 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.205250 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-pr9tl"] Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.324008 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-amphora-certs\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.324065 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-combined-ca-bundle\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.324173 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-config-data-merged\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.324335 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-scripts\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.324430 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-hm-ports\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.324491 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-config-data\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.425959 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-scripts\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.426013 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: 
\"kubernetes.io/configmap/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-hm-ports\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.427229 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-hm-ports\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.427336 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-config-data\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.427743 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-amphora-certs\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.427779 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-combined-ca-bundle\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.427853 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-config-data-merged\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.428818 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-config-data-merged\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.432212 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-scripts\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.433072 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-config-data\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.433449 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-amphora-certs\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " 
pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.450832 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-combined-ca-bundle\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.530471 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.635759 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-vtnkx" event={"ID":"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39","Type":"ContainerStarted","Data":"935f678721148c181fa92b0136c39b6faa3d7b6fbdecfa689e4fa25697e2e514"} Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.203022 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-pr9tl"] Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.649320 4902 generic.go:334] "Generic (PLEG): container finished" podID="e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39" containerID="935f678721148c181fa92b0136c39b6faa3d7b6fbdecfa689e4fa25697e2e514" exitCode=0 Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.649416 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-vtnkx" event={"ID":"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39","Type":"ContainerDied","Data":"935f678721148c181fa92b0136c39b6faa3d7b6fbdecfa689e4fa25697e2e514"} Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.654859 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-pr9tl" event={"ID":"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008","Type":"ContainerStarted","Data":"334f3c92e4efbdc9609c4ad989fc4a2ffdc1a2767b989a8f926ee900339fdc9d"} Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.912375 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-zmb5b"] Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.912635 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" containerName="octavia-amphora-httpd" containerID="cri-o://657dc03492b59a1f33c2bcfda57969b4fd0b8668563d2f702e94ee1a2f5b2099" gracePeriod=30 Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.926677 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-drv9p"] Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.929727 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-drv9p" Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.931805 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.932183 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.948820 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-drv9p"] Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.066667 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/646b20f3-5a05-4352-9645-69bed7f67dae-config-data-merged\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.067381 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-scripts\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.067460 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-config-data\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.067498 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-combined-ca-bundle\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.067578 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-amphora-certs\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.067722 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/646b20f3-5a05-4352-9645-69bed7f67dae-hm-ports\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.169072 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/646b20f3-5a05-4352-9645-69bed7f67dae-hm-ports\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.169204 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/646b20f3-5a05-4352-9645-69bed7f67dae-config-data-merged\") pod \"octavia-worker-drv9p\" (UID: 
\"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.169236 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-scripts\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.169278 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-config-data\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.169303 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-combined-ca-bundle\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.169331 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-amphora-certs\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.169702 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/646b20f3-5a05-4352-9645-69bed7f67dae-config-data-merged\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.171359 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/646b20f3-5a05-4352-9645-69bed7f67dae-hm-ports\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.174406 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-amphora-certs\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.175364 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-scripts\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.177438 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-combined-ca-bundle\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.178198 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-config-data\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.257199 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.664027 4902 generic.go:334] "Generic (PLEG): container finished" podID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" containerID="657dc03492b59a1f33c2bcfda57969b4fd0b8668563d2f702e94ee1a2f5b2099" exitCode=0 Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.664281 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" event={"ID":"e68cbe1e-2ace-4011-856c-5fa393f45b4b","Type":"ContainerDied","Data":"657dc03492b59a1f33c2bcfda57969b4fd0b8668563d2f702e94ee1a2f5b2099"} Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.666181 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-vtnkx" event={"ID":"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39","Type":"ContainerStarted","Data":"a4f4237321451e31ad7db921d31a338e857137025761b91a07f74b4fefb71d19"} Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.666357 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.686875 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-vtnkx" podStartSLOduration=5.686849886 podStartE2EDuration="5.686849886s" podCreationTimestamp="2026-01-21 16:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:15:11.680443116 +0000 UTC m=+6073.757276155" watchObservedRunningTime="2026-01-21 16:15:11.686849886 +0000 UTC m=+6073.763682915" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.954400 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.090857 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e68cbe1e-2ace-4011-856c-5fa393f45b4b-httpd-config\") pod \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\" (UID: \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\") " Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.091102 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/e68cbe1e-2ace-4011-856c-5fa393f45b4b-amphora-image\") pod \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\" (UID: \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\") " Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.130524 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68cbe1e-2ace-4011-856c-5fa393f45b4b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e68cbe1e-2ace-4011-856c-5fa393f45b4b" (UID: "e68cbe1e-2ace-4011-856c-5fa393f45b4b"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.191331 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e68cbe1e-2ace-4011-856c-5fa393f45b4b-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "e68cbe1e-2ace-4011-856c-5fa393f45b4b" (UID: "e68cbe1e-2ace-4011-856c-5fa393f45b4b"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.194183 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e68cbe1e-2ace-4011-856c-5fa393f45b4b-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.194215 4902 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/e68cbe1e-2ace-4011-856c-5fa393f45b4b-amphora-image\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.354156 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-drv9p"] Jan 21 16:15:12 crc kubenswrapper[4902]: E0121 16:15:12.363519 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode68cbe1e_2ace_4011_856c_5fa393f45b4b.slice/crio-b6f108856fdfaa3ddd29d66fca750e5174c3b7beab2b9dd46ad0290fb86745d4\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode68cbe1e_2ace_4011_856c_5fa393f45b4b.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.676404 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.676396 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" event={"ID":"e68cbe1e-2ace-4011-856c-5fa393f45b4b","Type":"ContainerDied","Data":"b6f108856fdfaa3ddd29d66fca750e5174c3b7beab2b9dd46ad0290fb86745d4"} Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.676796 4902 scope.go:117] "RemoveContainer" containerID="657dc03492b59a1f33c2bcfda57969b4fd0b8668563d2f702e94ee1a2f5b2099" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.680307 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-pr9tl" event={"ID":"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008","Type":"ContainerStarted","Data":"dd237234903b1e90630066f8f3b5624819590b40362e32208301679b925d26d9"} Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.686776 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-drv9p" event={"ID":"646b20f3-5a05-4352-9645-69bed7f67dae","Type":"ContainerStarted","Data":"88e41ff5dbb6abfe33e302ae3175cef222d065b428b1717c6bc5037869bd2e1b"} Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.703035 4902 scope.go:117] "RemoveContainer" containerID="69bb7c1dbb251c3de8ae57f07e730117aca9b6c44df49c90c6093a8ecc70f9e1" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.708430 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-zmb5b"] Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.720105 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-zmb5b"] Jan 21 16:15:13 crc kubenswrapper[4902]: I0121 16:15:13.698285 4902 generic.go:334] "Generic (PLEG): container finished" podID="34cb5d58-0b3f-40eb-a5ee-b8ab812c8008" containerID="dd237234903b1e90630066f8f3b5624819590b40362e32208301679b925d26d9" exitCode=0 Jan 21 16:15:13 crc kubenswrapper[4902]: I0121 16:15:13.698353 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-pr9tl" event={"ID":"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008","Type":"ContainerDied","Data":"dd237234903b1e90630066f8f3b5624819590b40362e32208301679b925d26d9"} Jan 21 16:15:14 crc kubenswrapper[4902]: I0121 16:15:14.306818 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" path="/var/lib/kubelet/pods/e68cbe1e-2ace-4011-856c-5fa393f45b4b/volumes" Jan 21 16:15:14 crc kubenswrapper[4902]: I0121 16:15:14.722061 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-pr9tl" event={"ID":"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008","Type":"ContainerStarted","Data":"94dc0ddc8010dc24e008edc4ffa025daf08a6bd4d7dead87a3434da79e0eab7d"} Jan 21 16:15:14 crc kubenswrapper[4902]: I0121 16:15:14.722164 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:14 crc kubenswrapper[4902]: I0121 16:15:14.724598 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-drv9p" event={"ID":"646b20f3-5a05-4352-9645-69bed7f67dae","Type":"ContainerStarted","Data":"ad2399bd80934ddbdfd2e4ae6471b91d072bef38d16c598a25fd79a08cd3c479"} Jan 21 16:15:14 crc kubenswrapper[4902]: I0121 16:15:14.740917 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-pr9tl" podStartSLOduration=4.08707319 
podStartE2EDuration="5.740894678s" podCreationTimestamp="2026-01-21 16:15:09 +0000 UTC" firstStartedPulling="2026-01-21 16:15:10.209502734 +0000 UTC m=+6072.286335763" lastFinishedPulling="2026-01-21 16:15:11.863324222 +0000 UTC m=+6073.940157251" observedRunningTime="2026-01-21 16:15:14.740030433 +0000 UTC m=+6076.816863482" watchObservedRunningTime="2026-01-21 16:15:14.740894678 +0000 UTC m=+6076.817727707" Jan 21 16:15:15 crc kubenswrapper[4902]: I0121 16:15:15.738022 4902 generic.go:334] "Generic (PLEG): container finished" podID="646b20f3-5a05-4352-9645-69bed7f67dae" containerID="ad2399bd80934ddbdfd2e4ae6471b91d072bef38d16c598a25fd79a08cd3c479" exitCode=0 Jan 21 16:15:15 crc kubenswrapper[4902]: I0121 16:15:15.738173 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-drv9p" event={"ID":"646b20f3-5a05-4352-9645-69bed7f67dae","Type":"ContainerDied","Data":"ad2399bd80934ddbdfd2e4ae6471b91d072bef38d16c598a25fd79a08cd3c479"} Jan 21 16:15:16 crc kubenswrapper[4902]: I0121 16:15:16.749722 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-drv9p" event={"ID":"646b20f3-5a05-4352-9645-69bed7f67dae","Type":"ContainerStarted","Data":"2939b4e11c7b2d6987e7ace53097d49dab09fbb5238c00f5000c6f1e44443cde"} Jan 21 16:15:16 crc kubenswrapper[4902]: I0121 16:15:16.751240 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-drv9p" Jan 21 16:15:16 crc kubenswrapper[4902]: I0121 16:15:16.773946 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-drv9p" podStartSLOduration=5.422267501 podStartE2EDuration="6.773920758s" podCreationTimestamp="2026-01-21 16:15:10 +0000 UTC" firstStartedPulling="2026-01-21 16:15:12.358910978 +0000 UTC m=+6074.435744017" lastFinishedPulling="2026-01-21 16:15:13.710564245 +0000 UTC m=+6075.787397274" observedRunningTime="2026-01-21 16:15:16.770905144 +0000 UTC m=+6078.847738203" watchObservedRunningTime="2026-01-21 16:15:16.773920758 +0000 UTC m=+6078.850753807" Jan 21 16:15:19 crc kubenswrapper[4902]: I0121 16:15:19.295279 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:15:19 crc kubenswrapper[4902]: E0121 16:15:19.295808 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:15:22 crc kubenswrapper[4902]: I0121 16:15:22.378490 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:24 crc kubenswrapper[4902]: I0121 16:15:24.572281 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:26 crc kubenswrapper[4902]: I0121 16:15:26.111017 4902 scope.go:117] "RemoveContainer" containerID="2d74f71a998726973b118e0b0755aa5903f2b68cb19dc4c893a565df10186a56" Jan 21 16:15:26 crc kubenswrapper[4902]: I0121 16:15:26.286259 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-drv9p" Jan 21 16:15:31 crc kubenswrapper[4902]: I0121 16:15:31.295237 4902 scope.go:117] "RemoveContainer" 
containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:15:31 crc kubenswrapper[4902]: E0121 16:15:31.296231 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.803093 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s5tqc"] Jan 21 16:15:42 crc kubenswrapper[4902]: E0121 16:15:42.803893 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" containerName="octavia-amphora-httpd" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.803904 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" containerName="octavia-amphora-httpd" Jan 21 16:15:42 crc kubenswrapper[4902]: E0121 16:15:42.803913 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" containerName="init" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.803918 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" containerName="init" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.804103 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" containerName="octavia-amphora-httpd" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.805455 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.815643 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s5tqc"] Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.892141 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-catalog-content\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.892357 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-utilities\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.892864 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rthxf\" (UniqueName: \"kubernetes.io/projected/a2335693-82d8-44d9-93cb-8845da187fc4-kube-api-access-rthxf\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.996008 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rthxf\" (UniqueName: \"kubernetes.io/projected/a2335693-82d8-44d9-93cb-8845da187fc4-kube-api-access-rthxf\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.996090 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-catalog-content\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.996164 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-utilities\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.997275 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-catalog-content\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.998901 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-utilities\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:43 crc kubenswrapper[4902]: I0121 16:15:43.022622 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rthxf\" (UniqueName: \"kubernetes.io/projected/a2335693-82d8-44d9-93cb-8845da187fc4-kube-api-access-rthxf\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:43 crc kubenswrapper[4902]: I0121 16:15:43.130128 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:43 crc kubenswrapper[4902]: I0121 16:15:43.594927 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s5tqc"] Jan 21 16:15:44 crc kubenswrapper[4902]: I0121 16:15:44.024159 4902 generic.go:334] "Generic (PLEG): container finished" podID="a2335693-82d8-44d9-93cb-8845da187fc4" containerID="0f5fdb1f77ee5e53923e8edceba05628177b711a2533fe02fb33769c82576bcf" exitCode=0 Jan 21 16:15:44 crc kubenswrapper[4902]: I0121 16:15:44.024204 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5tqc" event={"ID":"a2335693-82d8-44d9-93cb-8845da187fc4","Type":"ContainerDied","Data":"0f5fdb1f77ee5e53923e8edceba05628177b711a2533fe02fb33769c82576bcf"} Jan 21 16:15:44 crc kubenswrapper[4902]: I0121 16:15:44.024231 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5tqc" event={"ID":"a2335693-82d8-44d9-93cb-8845da187fc4","Type":"ContainerStarted","Data":"0f43d77e4d6e94ad35f67feaf5658b556c28c0b92a382bc5e8acbbd44673a54c"} Jan 21 16:15:45 crc kubenswrapper[4902]: I0121 16:15:45.035410 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5tqc" event={"ID":"a2335693-82d8-44d9-93cb-8845da187fc4","Type":"ContainerStarted","Data":"d6b0eff18372cd37ddfe92c986fb4a923d9dbd3f107869f09f0fdbe8e2eaaa5c"} Jan 21 16:15:46 crc kubenswrapper[4902]: I0121 16:15:46.056289 4902 generic.go:334] "Generic (PLEG): container finished" podID="a2335693-82d8-44d9-93cb-8845da187fc4" containerID="d6b0eff18372cd37ddfe92c986fb4a923d9dbd3f107869f09f0fdbe8e2eaaa5c" exitCode=0 Jan 21 16:15:46 crc kubenswrapper[4902]: I0121 16:15:46.056588 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5tqc" event={"ID":"a2335693-82d8-44d9-93cb-8845da187fc4","Type":"ContainerDied","Data":"d6b0eff18372cd37ddfe92c986fb4a923d9dbd3f107869f09f0fdbe8e2eaaa5c"} Jan 21 16:15:46 crc kubenswrapper[4902]: I0121 16:15:46.300661 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:15:46 crc kubenswrapper[4902]: E0121 16:15:46.302147 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:15:47 crc kubenswrapper[4902]: I0121 16:15:47.069737 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5tqc" event={"ID":"a2335693-82d8-44d9-93cb-8845da187fc4","Type":"ContainerStarted","Data":"4c0781a19d8b6c48f488f12d1b70c08865b23377c77473fa64a8a1663801f2cb"} Jan 21 16:15:47 crc kubenswrapper[4902]: I0121 16:15:47.099634 4902 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s5tqc" podStartSLOduration=2.6188767139999998 podStartE2EDuration="5.099613863s" podCreationTimestamp="2026-01-21 16:15:42 +0000 UTC" firstStartedPulling="2026-01-21 16:15:44.026625588 +0000 UTC m=+6106.103458617" lastFinishedPulling="2026-01-21 16:15:46.507362747 +0000 UTC m=+6108.584195766" observedRunningTime="2026-01-21 16:15:47.092485203 +0000 UTC m=+6109.169318232" watchObservedRunningTime="2026-01-21 16:15:47.099613863 +0000 UTC m=+6109.176446902" Jan 21 16:15:53 crc kubenswrapper[4902]: I0121 16:15:53.130622 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:53 crc kubenswrapper[4902]: I0121 16:15:53.131181 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:53 crc kubenswrapper[4902]: I0121 16:15:53.189580 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:54 crc kubenswrapper[4902]: I0121 16:15:54.179481 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:54 crc kubenswrapper[4902]: I0121 16:15:54.238031 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s5tqc"] Jan 21 16:15:56 crc kubenswrapper[4902]: I0121 16:15:56.187875 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s5tqc" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" containerName="registry-server" containerID="cri-o://4c0781a19d8b6c48f488f12d1b70c08865b23377c77473fa64a8a1663801f2cb" gracePeriod=2 Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.198026 4902 generic.go:334] "Generic (PLEG): container finished" podID="a2335693-82d8-44d9-93cb-8845da187fc4" containerID="4c0781a19d8b6c48f488f12d1b70c08865b23377c77473fa64a8a1663801f2cb" exitCode=0 Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.198127 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5tqc" event={"ID":"a2335693-82d8-44d9-93cb-8845da187fc4","Type":"ContainerDied","Data":"4c0781a19d8b6c48f488f12d1b70c08865b23377c77473fa64a8a1663801f2cb"} Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.198342 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5tqc" event={"ID":"a2335693-82d8-44d9-93cb-8845da187fc4","Type":"ContainerDied","Data":"0f43d77e4d6e94ad35f67feaf5658b556c28c0b92a382bc5e8acbbd44673a54c"} Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.198360 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f43d77e4d6e94ad35f67feaf5658b556c28c0b92a382bc5e8acbbd44673a54c" Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.216476 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.318346 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-utilities\") pod \"a2335693-82d8-44d9-93cb-8845da187fc4\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.318400 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-catalog-content\") pod \"a2335693-82d8-44d9-93cb-8845da187fc4\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.318663 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rthxf\" (UniqueName: \"kubernetes.io/projected/a2335693-82d8-44d9-93cb-8845da187fc4-kube-api-access-rthxf\") pod \"a2335693-82d8-44d9-93cb-8845da187fc4\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.319376 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-utilities" (OuterVolumeSpecName: "utilities") pod "a2335693-82d8-44d9-93cb-8845da187fc4" (UID: "a2335693-82d8-44d9-93cb-8845da187fc4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.324874 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2335693-82d8-44d9-93cb-8845da187fc4-kube-api-access-rthxf" (OuterVolumeSpecName: "kube-api-access-rthxf") pod "a2335693-82d8-44d9-93cb-8845da187fc4" (UID: "a2335693-82d8-44d9-93cb-8845da187fc4"). InnerVolumeSpecName "kube-api-access-rthxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.366000 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2335693-82d8-44d9-93cb-8845da187fc4" (UID: "a2335693-82d8-44d9-93cb-8845da187fc4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.421558 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rthxf\" (UniqueName: \"kubernetes.io/projected/a2335693-82d8-44d9-93cb-8845da187fc4-kube-api-access-rthxf\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.421603 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.421616 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:58 crc kubenswrapper[4902]: I0121 16:15:58.208956 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:58 crc kubenswrapper[4902]: I0121 16:15:58.264232 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s5tqc"] Jan 21 16:15:58 crc kubenswrapper[4902]: I0121 16:15:58.286197 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s5tqc"] Jan 21 16:15:58 crc kubenswrapper[4902]: I0121 16:15:58.306507 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" path="/var/lib/kubelet/pods/a2335693-82d8-44d9-93cb-8845da187fc4/volumes" Jan 21 16:16:01 crc kubenswrapper[4902]: I0121 16:16:01.295854 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:16:01 crc kubenswrapper[4902]: E0121 16:16:01.296953 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.153085 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5998889f69-hx8w9"] Jan 21 16:16:14 crc kubenswrapper[4902]: E0121 16:16:14.156259 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" containerName="registry-server" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.156286 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" containerName="registry-server" Jan 21 16:16:14 crc kubenswrapper[4902]: E0121 16:16:14.156313 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" containerName="extract-content" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.156321 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" containerName="extract-content" Jan 21 16:16:14 crc kubenswrapper[4902]: E0121 16:16:14.156348 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" containerName="extract-utilities" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.156361 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" containerName="extract-utilities" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.156629 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" containerName="registry-server" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.157988 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.161060 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-vm6cj" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.161077 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.161583 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.162524 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.182338 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5998889f69-hx8w9"] Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.214955 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.215448 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerName="glance-log" containerID="cri-o://d703f5632f2cbf952b8d8487e251807ade66f1d024b3d48fde5f54990b973dc3" gracePeriod=30 Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.215619 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerName="glance-httpd" containerID="cri-o://736f3facc63619fff931156c32623cacaeb743514ad4d9bc998e592c1498cea3" gracePeriod=30 Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.267625 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-scripts\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.267881 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-config-data\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.267920 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npmcz\" (UniqueName: \"kubernetes.io/projected/1a336745-0278-402d-b4c1-3b58f8fa66e9-kube-api-access-npmcz\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.268164 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a336745-0278-402d-b4c1-3b58f8fa66e9-logs\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.268231 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/1a336745-0278-402d-b4c1-3b58f8fa66e9-horizon-secret-key\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.280303 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.280597 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerName="glance-log" containerID="cri-o://b5f9108bd4e377347ea43cf1022065cb061fb7505fcb4f124adde97f4fd9fe0c" gracePeriod=30 Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.281331 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerName="glance-httpd" containerID="cri-o://59aec4d7b002f6bac7cebbdd58347eb07bbd6d976ee19de283329b9b2320f207" gracePeriod=30 Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.295288 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:16:14 crc kubenswrapper[4902]: E0121 16:16:14.295600 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.310431 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cdc5859df-vpr9s"] Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.312357 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.321441 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cdc5859df-vpr9s"] Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370415 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/941246aa-c88c-4447-95a9-0efe08817612-horizon-secret-key\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370571 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/941246aa-c88c-4447-95a9-0efe08817612-logs\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370638 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-scripts\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370702 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-scripts\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370783 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-config-data\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370815 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npmcz\" (UniqueName: \"kubernetes.io/projected/1a336745-0278-402d-b4c1-3b58f8fa66e9-kube-api-access-npmcz\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370883 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzbxg\" (UniqueName: \"kubernetes.io/projected/941246aa-c88c-4447-95a9-0efe08817612-kube-api-access-zzbxg\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370909 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-config-data\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370959 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1a336745-0278-402d-b4c1-3b58f8fa66e9-logs\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.371014 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1a336745-0278-402d-b4c1-3b58f8fa66e9-horizon-secret-key\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.371616 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-scripts\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.371914 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a336745-0278-402d-b4c1-3b58f8fa66e9-logs\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.372295 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-config-data\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.377541 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1a336745-0278-402d-b4c1-3b58f8fa66e9-horizon-secret-key\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.386447 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npmcz\" (UniqueName: \"kubernetes.io/projected/1a336745-0278-402d-b4c1-3b58f8fa66e9-kube-api-access-npmcz\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.403277 4902 generic.go:334] "Generic (PLEG): container finished" podID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerID="d703f5632f2cbf952b8d8487e251807ade66f1d024b3d48fde5f54990b973dc3" exitCode=143 Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.403332 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"621700c2-adff-4cf1-81a4-fb0213e5e919","Type":"ContainerDied","Data":"d703f5632f2cbf952b8d8487e251807ade66f1d024b3d48fde5f54990b973dc3"} Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.472311 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-scripts\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.472381 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzbxg\" (UniqueName: 
\"kubernetes.io/projected/941246aa-c88c-4447-95a9-0efe08817612-kube-api-access-zzbxg\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.472400 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-config-data\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.472462 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/941246aa-c88c-4447-95a9-0efe08817612-horizon-secret-key\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.472523 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/941246aa-c88c-4447-95a9-0efe08817612-logs\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.472896 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/941246aa-c88c-4447-95a9-0efe08817612-logs\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.473161 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-scripts\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.474221 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-config-data\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.476690 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/941246aa-c88c-4447-95a9-0efe08817612-horizon-secret-key\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.485175 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.492187 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzbxg\" (UniqueName: \"kubernetes.io/projected/941246aa-c88c-4447-95a9-0efe08817612-kube-api-access-zzbxg\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.778973 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.963003 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5998889f69-hx8w9"] Jan 21 16:16:15 crc kubenswrapper[4902]: I0121 16:16:15.229554 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cdc5859df-vpr9s"] Jan 21 16:16:15 crc kubenswrapper[4902]: I0121 16:16:15.417849 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5998889f69-hx8w9" event={"ID":"1a336745-0278-402d-b4c1-3b58f8fa66e9","Type":"ContainerStarted","Data":"6e949b6543efff5451a444f3bd5efb9a9f0312a98d89cdd384669d329bffb82f"} Jan 21 16:16:15 crc kubenswrapper[4902]: I0121 16:16:15.422216 4902 generic.go:334] "Generic (PLEG): container finished" podID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerID="b5f9108bd4e377347ea43cf1022065cb061fb7505fcb4f124adde97f4fd9fe0c" exitCode=143 Jan 21 16:16:15 crc kubenswrapper[4902]: I0121 16:16:15.422304 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a90211b-865e-43ee-a4d2-4435d5377cac","Type":"ContainerDied","Data":"b5f9108bd4e377347ea43cf1022065cb061fb7505fcb4f124adde97f4fd9fe0c"} Jan 21 16:16:15 crc kubenswrapper[4902]: I0121 16:16:15.423853 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdc5859df-vpr9s" event={"ID":"941246aa-c88c-4447-95a9-0efe08817612","Type":"ContainerStarted","Data":"4b82e831be0b81da6f10c4a3ff492b70b47b678bffe68000c6d720bf8f6f3d32"} Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.491917 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5998889f69-hx8w9"] Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.523461 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7dd785d478-plbs7"] Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.526303 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.530592 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.541943 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7dd785d478-plbs7"] Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.588754 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6cdc5859df-vpr9s"] Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.620723 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-786f96566b-w596t"] Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.624430 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.653228 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-secret-key\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.655947 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-combined-ca-bundle\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.656405 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-tls-certs\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.656606 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-config-data\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.656645 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-scripts\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.656682 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbsdh\" (UniqueName: \"kubernetes.io/projected/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-kube-api-access-vbsdh\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.656873 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-logs\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.659928 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-786f96566b-w596t"] Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759259 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-combined-ca-bundle\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759336 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-config-data\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759376 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b772cd9d-83ce-4675-84de-09f40bdcabe3-logs\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759437 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-tls-certs\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759524 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-combined-ca-bundle\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759549 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-config-data\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759571 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-scripts\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759594 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-scripts\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759619 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbsdh\" (UniqueName: \"kubernetes.io/projected/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-kube-api-access-vbsdh\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759681 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-logs\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759704 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbc7c\" (UniqueName: 
\"kubernetes.io/projected/b772cd9d-83ce-4675-84de-09f40bdcabe3-kube-api-access-hbc7c\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759769 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-secret-key\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759801 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-tls-certs\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759851 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-secret-key\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.760484 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-logs\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.763680 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-scripts\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.763934 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-config-data\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.765774 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-secret-key\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.766411 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-combined-ca-bundle\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.767819 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-tls-certs\") pod \"horizon-7dd785d478-plbs7\" (UID: 
\"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.782803 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbsdh\" (UniqueName: \"kubernetes.io/projected/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-kube-api-access-vbsdh\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.859475 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.863877 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-combined-ca-bundle\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.863940 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-scripts\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.864015 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbc7c\" (UniqueName: \"kubernetes.io/projected/b772cd9d-83ce-4675-84de-09f40bdcabe3-kube-api-access-hbc7c\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.864098 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-secret-key\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.864133 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-tls-certs\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.864203 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-config-data\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.864236 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b772cd9d-83ce-4675-84de-09f40bdcabe3-logs\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.864740 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b772cd9d-83ce-4675-84de-09f40bdcabe3-logs\") pod 
\"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.866448 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-config-data\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.876829 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-secret-key\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.876838 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-tls-certs\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.877352 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-scripts\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.878508 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-combined-ca-bundle\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.885881 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbc7c\" (UniqueName: \"kubernetes.io/projected/b772cd9d-83ce-4675-84de-09f40bdcabe3-kube-api-access-hbc7c\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.948410 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:17 crc kubenswrapper[4902]: I0121 16:16:17.340435 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7dd785d478-plbs7"] Jan 21 16:16:17 crc kubenswrapper[4902]: W0121 16:16:17.343392 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b8ff7ce_f44c_45d2_ac7c_ddebb604798c.slice/crio-4686f3e06661bcfec8e992129cddce590c37e819ce3cb3dd7fbf805003f4c581 WatchSource:0}: Error finding container 4686f3e06661bcfec8e992129cddce590c37e819ce3cb3dd7fbf805003f4c581: Status 404 returned error can't find the container with id 4686f3e06661bcfec8e992129cddce590c37e819ce3cb3dd7fbf805003f4c581 Jan 21 16:16:17 crc kubenswrapper[4902]: I0121 16:16:17.436105 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-786f96566b-w596t"] Jan 21 16:16:17 crc kubenswrapper[4902]: I0121 16:16:17.455071 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd785d478-plbs7" event={"ID":"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c","Type":"ContainerStarted","Data":"4686f3e06661bcfec8e992129cddce590c37e819ce3cb3dd7fbf805003f4c581"} Jan 21 16:16:17 crc kubenswrapper[4902]: W0121 16:16:17.457605 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb772cd9d_83ce_4675_84de_09f40bdcabe3.slice/crio-3b11c2a3c705c7354a42e8da86cbb16b0b2109f8538bbf991bf39e340b5a23cd WatchSource:0}: Error finding container 3b11c2a3c705c7354a42e8da86cbb16b0b2109f8538bbf991bf39e340b5a23cd: Status 404 returned error can't find the container with id 3b11c2a3c705c7354a42e8da86cbb16b0b2109f8538bbf991bf39e340b5a23cd Jan 21 16:16:18 crc kubenswrapper[4902]: I0121 16:16:18.474729 4902 generic.go:334] "Generic (PLEG): container finished" podID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerID="59aec4d7b002f6bac7cebbdd58347eb07bbd6d976ee19de283329b9b2320f207" exitCode=0 Jan 21 16:16:18 crc kubenswrapper[4902]: I0121 16:16:18.475186 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a90211b-865e-43ee-a4d2-4435d5377cac","Type":"ContainerDied","Data":"59aec4d7b002f6bac7cebbdd58347eb07bbd6d976ee19de283329b9b2320f207"} Jan 21 16:16:18 crc kubenswrapper[4902]: I0121 16:16:18.481837 4902 generic.go:334] "Generic (PLEG): container finished" podID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerID="736f3facc63619fff931156c32623cacaeb743514ad4d9bc998e592c1498cea3" exitCode=0 Jan 21 16:16:18 crc kubenswrapper[4902]: I0121 16:16:18.481892 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"621700c2-adff-4cf1-81a4-fb0213e5e919","Type":"ContainerDied","Data":"736f3facc63619fff931156c32623cacaeb743514ad4d9bc998e592c1498cea3"} Jan 21 16:16:18 crc kubenswrapper[4902]: I0121 16:16:18.483422 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786f96566b-w596t" event={"ID":"b772cd9d-83ce-4675-84de-09f40bdcabe3","Type":"ContainerStarted","Data":"3b11c2a3c705c7354a42e8da86cbb16b0b2109f8538bbf991bf39e340b5a23cd"} Jan 21 16:16:20 crc kubenswrapper[4902]: I0121 16:16:20.057575 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0136-account-create-update-k4cmq"] Jan 21 16:16:20 crc kubenswrapper[4902]: I0121 16:16:20.070311 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-db-create-85k9w"] Jan 21 16:16:20 crc kubenswrapper[4902]: I0121 16:16:20.080409 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0136-account-create-update-k4cmq"] Jan 21 16:16:20 crc kubenswrapper[4902]: I0121 16:16:20.088605 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-85k9w"] Jan 21 16:16:20 crc kubenswrapper[4902]: I0121 16:16:20.307822 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c6f518-fd8b-4c60-9f36-1eb57bd30b06" path="/var/lib/kubelet/pods/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06/volumes" Jan 21 16:16:20 crc kubenswrapper[4902]: I0121 16:16:20.308864 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8" path="/var/lib/kubelet/pods/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8/volumes" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.500365 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.508236 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.583434 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a90211b-865e-43ee-a4d2-4435d5377cac","Type":"ContainerDied","Data":"9465028a66213606555e0f8ddd61e53e1a204236d21e0dbf53c9bae174755deb"} Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.583499 4902 scope.go:117] "RemoveContainer" containerID="59aec4d7b002f6bac7cebbdd58347eb07bbd6d976ee19de283329b9b2320f207" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.583677 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.611162 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"621700c2-adff-4cf1-81a4-fb0213e5e919","Type":"ContainerDied","Data":"fb6c969ddf6477f474e95f9c5c6fde9452e3279bb465fbf4b3d1c7ae5b80a349"} Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.611273 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695117 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-combined-ca-bundle\") pod \"621700c2-adff-4cf1-81a4-fb0213e5e919\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695284 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-public-tls-certs\") pod \"621700c2-adff-4cf1-81a4-fb0213e5e919\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695355 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-config-data\") pod \"8a90211b-865e-43ee-a4d2-4435d5377cac\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695508 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-httpd-run\") pod \"8a90211b-865e-43ee-a4d2-4435d5377cac\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695573 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-combined-ca-bundle\") pod \"8a90211b-865e-43ee-a4d2-4435d5377cac\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695600 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-internal-tls-certs\") pod \"8a90211b-865e-43ee-a4d2-4435d5377cac\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695643 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-config-data\") pod \"621700c2-adff-4cf1-81a4-fb0213e5e919\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695673 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-logs\") pod \"621700c2-adff-4cf1-81a4-fb0213e5e919\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695744 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-httpd-run\") pod \"621700c2-adff-4cf1-81a4-fb0213e5e919\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695767 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-logs\") pod \"8a90211b-865e-43ee-a4d2-4435d5377cac\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " Jan 
21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695809 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkkjx\" (UniqueName: \"kubernetes.io/projected/621700c2-adff-4cf1-81a4-fb0213e5e919-kube-api-access-kkkjx\") pod \"621700c2-adff-4cf1-81a4-fb0213e5e919\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695841 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-scripts\") pod \"8a90211b-865e-43ee-a4d2-4435d5377cac\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695921 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-scripts\") pod \"621700c2-adff-4cf1-81a4-fb0213e5e919\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695992 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm8hg\" (UniqueName: \"kubernetes.io/projected/8a90211b-865e-43ee-a4d2-4435d5377cac-kube-api-access-jm8hg\") pod \"8a90211b-865e-43ee-a4d2-4435d5377cac\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.696799 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-logs" (OuterVolumeSpecName: "logs") pod "8a90211b-865e-43ee-a4d2-4435d5377cac" (UID: "8a90211b-865e-43ee-a4d2-4435d5377cac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.696821 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-logs" (OuterVolumeSpecName: "logs") pod "621700c2-adff-4cf1-81a4-fb0213e5e919" (UID: "621700c2-adff-4cf1-81a4-fb0213e5e919"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.696999 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "621700c2-adff-4cf1-81a4-fb0213e5e919" (UID: "621700c2-adff-4cf1-81a4-fb0213e5e919"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.697178 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8a90211b-865e-43ee-a4d2-4435d5377cac" (UID: "8a90211b-865e-43ee-a4d2-4435d5377cac"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.704887 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621700c2-adff-4cf1-81a4-fb0213e5e919-kube-api-access-kkkjx" (OuterVolumeSpecName: "kube-api-access-kkkjx") pod "621700c2-adff-4cf1-81a4-fb0213e5e919" (UID: "621700c2-adff-4cf1-81a4-fb0213e5e919"). InnerVolumeSpecName "kube-api-access-kkkjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.705079 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-scripts" (OuterVolumeSpecName: "scripts") pod "8a90211b-865e-43ee-a4d2-4435d5377cac" (UID: "8a90211b-865e-43ee-a4d2-4435d5377cac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.711654 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-scripts" (OuterVolumeSpecName: "scripts") pod "621700c2-adff-4cf1-81a4-fb0213e5e919" (UID: "621700c2-adff-4cf1-81a4-fb0213e5e919"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.716175 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a90211b-865e-43ee-a4d2-4435d5377cac-kube-api-access-jm8hg" (OuterVolumeSpecName: "kube-api-access-jm8hg") pod "8a90211b-865e-43ee-a4d2-4435d5377cac" (UID: "8a90211b-865e-43ee-a4d2-4435d5377cac"). InnerVolumeSpecName "kube-api-access-jm8hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.735949 4902 scope.go:117] "RemoveContainer" containerID="b5f9108bd4e377347ea43cf1022065cb061fb7505fcb4f124adde97f4fd9fe0c" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.746459 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a90211b-865e-43ee-a4d2-4435d5377cac" (UID: "8a90211b-865e-43ee-a4d2-4435d5377cac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.768308 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "621700c2-adff-4cf1-81a4-fb0213e5e919" (UID: "621700c2-adff-4cf1-81a4-fb0213e5e919"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.783102 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-config-data" (OuterVolumeSpecName: "config-data") pod "8a90211b-865e-43ee-a4d2-4435d5377cac" (UID: "8a90211b-865e-43ee-a4d2-4435d5377cac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.792726 4902 scope.go:117] "RemoveContainer" containerID="736f3facc63619fff931156c32623cacaeb743514ad4d9bc998e592c1498cea3" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798573 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798601 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798612 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798623 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798633 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798644 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkkjx\" (UniqueName: \"kubernetes.io/projected/621700c2-adff-4cf1-81a4-fb0213e5e919-kube-api-access-kkkjx\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798654 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798666 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798676 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm8hg\" (UniqueName: \"kubernetes.io/projected/8a90211b-865e-43ee-a4d2-4435d5377cac-kube-api-access-jm8hg\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798687 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798699 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.805973 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-config-data" (OuterVolumeSpecName: "config-data") pod "621700c2-adff-4cf1-81a4-fb0213e5e919" (UID: "621700c2-adff-4cf1-81a4-fb0213e5e919"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.806257 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8a90211b-865e-43ee-a4d2-4435d5377cac" (UID: "8a90211b-865e-43ee-a4d2-4435d5377cac"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.828176 4902 scope.go:117] "RemoveContainer" containerID="d703f5632f2cbf952b8d8487e251807ade66f1d024b3d48fde5f54990b973dc3" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.865019 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "621700c2-adff-4cf1-81a4-fb0213e5e919" (UID: "621700c2-adff-4cf1-81a4-fb0213e5e919"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.901547 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.901596 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.901610 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.962451 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.985180 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.008225 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018184 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:16:23 crc kubenswrapper[4902]: E0121 16:16:23.018701 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerName="glance-log" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018717 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerName="glance-log" Jan 21 16:16:23 crc kubenswrapper[4902]: E0121 16:16:23.018734 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerName="glance-log" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018740 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerName="glance-log" Jan 21 16:16:23 crc kubenswrapper[4902]: E0121 16:16:23.018749 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" 
containerName="glance-httpd" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018756 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerName="glance-httpd" Jan 21 16:16:23 crc kubenswrapper[4902]: E0121 16:16:23.018763 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerName="glance-httpd" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018770 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerName="glance-httpd" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018942 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerName="glance-log" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018960 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerName="glance-httpd" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018970 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerName="glance-log" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018981 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerName="glance-httpd" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.020028 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.023185 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.023403 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.023556 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.024064 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mn7jp" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.034671 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.043304 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.056673 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.058629 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.062674 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.062860 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.081115 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215303 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqj6v\" (UniqueName: \"kubernetes.io/projected/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-kube-api-access-tqj6v\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215367 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215409 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215446 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215477 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcjw5\" (UniqueName: \"kubernetes.io/projected/43059835-649d-40c9-bf13-f46c9d6b65a6-kube-api-access-qcjw5\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215545 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43059835-649d-40c9-bf13-f46c9d6b65a6-logs\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215583 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43059835-649d-40c9-bf13-f46c9d6b65a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215669 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215719 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215767 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-logs\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215814 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215859 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215891 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215928 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318167 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318248 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqj6v\" (UniqueName: \"kubernetes.io/projected/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-kube-api-access-tqj6v\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 
16:16:23.318288 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318322 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318357 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318384 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcjw5\" (UniqueName: \"kubernetes.io/projected/43059835-649d-40c9-bf13-f46c9d6b65a6-kube-api-access-qcjw5\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318428 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43059835-649d-40c9-bf13-f46c9d6b65a6-logs\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318461 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43059835-649d-40c9-bf13-f46c9d6b65a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318532 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318580 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318619 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-logs\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318654 4902 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318698 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318727 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.319190 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43059835-649d-40c9-bf13-f46c9d6b65a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.319261 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43059835-649d-40c9-bf13-f46c9d6b65a6-logs\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.319511 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-logs\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.319858 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.326761 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.327317 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.328558 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.329567 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.337103 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.338483 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.338783 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.339066 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.340988 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcjw5\" (UniqueName: \"kubernetes.io/projected/43059835-649d-40c9-bf13-f46c9d6b65a6-kube-api-access-qcjw5\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.341747 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqj6v\" (UniqueName: \"kubernetes.io/projected/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-kube-api-access-tqj6v\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.356902 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.431209 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.637963 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd785d478-plbs7" event={"ID":"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c","Type":"ContainerStarted","Data":"f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b"} Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.638342 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd785d478-plbs7" event={"ID":"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c","Type":"ContainerStarted","Data":"4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f"} Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.645863 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdc5859df-vpr9s" event={"ID":"941246aa-c88c-4447-95a9-0efe08817612","Type":"ContainerStarted","Data":"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d"} Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.645902 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdc5859df-vpr9s" event={"ID":"941246aa-c88c-4447-95a9-0efe08817612","Type":"ContainerStarted","Data":"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4"} Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.646006 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cdc5859df-vpr9s" podUID="941246aa-c88c-4447-95a9-0efe08817612" containerName="horizon-log" containerID="cri-o://65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4" gracePeriod=30 Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.646099 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cdc5859df-vpr9s" podUID="941246aa-c88c-4447-95a9-0efe08817612" containerName="horizon" containerID="cri-o://7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d" gracePeriod=30 Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.654086 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786f96566b-w596t" event={"ID":"b772cd9d-83ce-4675-84de-09f40bdcabe3","Type":"ContainerStarted","Data":"b5b92e7f1cc27fed5221f05667fdb25b332ac410148a8012346660a03a7b0fdf"} Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.654121 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786f96566b-w596t" event={"ID":"b772cd9d-83ce-4675-84de-09f40bdcabe3","Type":"ContainerStarted","Data":"dd9c814774718de26b2a6f5f159c980f718ec5bd198d471d2426d82a67f32ddd"} Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.665460 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7dd785d478-plbs7" podStartSLOduration=2.553676663 podStartE2EDuration="7.665442399s" podCreationTimestamp="2026-01-21 16:16:16 +0000 UTC" firstStartedPulling="2026-01-21 16:16:17.346012919 +0000 UTC m=+6139.422845958" lastFinishedPulling="2026-01-21 16:16:22.457778665 +0000 UTC m=+6144.534611694" observedRunningTime="2026-01-21 16:16:23.661758046 +0000 UTC m=+6145.738591075" watchObservedRunningTime="2026-01-21 16:16:23.665442399 +0000 UTC m=+6145.742275428" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.669997 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5998889f69-hx8w9" 
event={"ID":"1a336745-0278-402d-b4c1-3b58f8fa66e9","Type":"ContainerStarted","Data":"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2"} Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.670081 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5998889f69-hx8w9" event={"ID":"1a336745-0278-402d-b4c1-3b58f8fa66e9","Type":"ContainerStarted","Data":"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890"} Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.670201 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5998889f69-hx8w9" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerName="horizon-log" containerID="cri-o://60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890" gracePeriod=30 Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.670248 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5998889f69-hx8w9" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerName="horizon" containerID="cri-o://ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2" gracePeriod=30 Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.697235 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-786f96566b-w596t" podStartSLOduration=2.645509457 podStartE2EDuration="7.697210323s" podCreationTimestamp="2026-01-21 16:16:16 +0000 UTC" firstStartedPulling="2026-01-21 16:16:17.462302071 +0000 UTC m=+6139.539135100" lastFinishedPulling="2026-01-21 16:16:22.514002937 +0000 UTC m=+6144.590835966" observedRunningTime="2026-01-21 16:16:23.6786251 +0000 UTC m=+6145.755458129" watchObservedRunningTime="2026-01-21 16:16:23.697210323 +0000 UTC m=+6145.774043352" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.737168 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5998889f69-hx8w9" podStartSLOduration=2.248045731 podStartE2EDuration="9.737148297s" podCreationTimestamp="2026-01-21 16:16:14 +0000 UTC" firstStartedPulling="2026-01-21 16:16:14.968896085 +0000 UTC m=+6137.045729114" lastFinishedPulling="2026-01-21 16:16:22.457998651 +0000 UTC m=+6144.534831680" observedRunningTime="2026-01-21 16:16:23.734449931 +0000 UTC m=+6145.811282960" watchObservedRunningTime="2026-01-21 16:16:23.737148297 +0000 UTC m=+6145.813981326" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.744497 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6cdc5859df-vpr9s" podStartSLOduration=2.4556462740000002 podStartE2EDuration="9.744477014s" podCreationTimestamp="2026-01-21 16:16:14 +0000 UTC" firstStartedPulling="2026-01-21 16:16:15.226269698 +0000 UTC m=+6137.303102747" lastFinishedPulling="2026-01-21 16:16:22.515100458 +0000 UTC m=+6144.591933487" observedRunningTime="2026-01-21 16:16:23.71521062 +0000 UTC m=+6145.792043649" watchObservedRunningTime="2026-01-21 16:16:23.744477014 +0000 UTC m=+6145.821310043" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.943260 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:16:24 crc kubenswrapper[4902]: I0121 16:16:24.053618 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:16:24 crc kubenswrapper[4902]: I0121 16:16:24.464865 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" 
path="/var/lib/kubelet/pods/621700c2-adff-4cf1-81a4-fb0213e5e919/volumes" Jan 21 16:16:24 crc kubenswrapper[4902]: I0121 16:16:24.466680 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" path="/var/lib/kubelet/pods/8a90211b-865e-43ee-a4d2-4435d5377cac/volumes" Jan 21 16:16:24 crc kubenswrapper[4902]: I0121 16:16:24.486054 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:24 crc kubenswrapper[4902]: I0121 16:16:24.687962 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5","Type":"ContainerStarted","Data":"9cfdffd840702e0e6c02aa2ed7cbe476730e81f72919f796c530733b81c2799e"} Jan 21 16:16:24 crc kubenswrapper[4902]: I0121 16:16:24.689584 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43059835-649d-40c9-bf13-f46c9d6b65a6","Type":"ContainerStarted","Data":"1e691a386ce0ecf5954a8e4ca1743adc7e19bac2c3d561d3031c3780f28a4e42"} Jan 21 16:16:24 crc kubenswrapper[4902]: I0121 16:16:24.780100 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:25 crc kubenswrapper[4902]: I0121 16:16:25.700573 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43059835-649d-40c9-bf13-f46c9d6b65a6","Type":"ContainerStarted","Data":"920df749f27a24b9a35bb974f78f3bdaa0871ed3eb4daa706c1cfd5b95ffdd08"} Jan 21 16:16:25 crc kubenswrapper[4902]: I0121 16:16:25.708726 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5","Type":"ContainerStarted","Data":"763c1eb0c0214c1476a67572a18370dd785222d6aa922d4881d3926263c40c17"} Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.198139 4902 scope.go:117] "RemoveContainer" containerID="e7ae920f7061533fd1ae5c5eabfd18124e9c27f0aad7594a5b9ba20211753b38" Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.240147 4902 scope.go:117] "RemoveContainer" containerID="c71cec8eacda47056c7a215f2b04bc9d493e2cbfdf871841495ef07bfb7eb7a5" Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.294960 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:16:26 crc kubenswrapper[4902]: E0121 16:16:26.295309 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.722937 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43059835-649d-40c9-bf13-f46c9d6b65a6","Type":"ContainerStarted","Data":"f841d7ec3a945d5629fca6062dbd1bdbf1b8d411ab3b317f83b99177f1306350"} Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.725647 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5","Type":"ContainerStarted","Data":"3c71f8800c06357ab47db802223123c8f2e58c9e194b664e56104f1339bdbacf"} Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.762320 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.762295666 podStartE2EDuration="4.762295666s" podCreationTimestamp="2026-01-21 16:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:16:26.743670402 +0000 UTC m=+6148.820503451" watchObservedRunningTime="2026-01-21 16:16:26.762295666 +0000 UTC m=+6148.839128695" Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.778721 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.778697237 podStartE2EDuration="4.778697237s" podCreationTimestamp="2026-01-21 16:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:16:26.768441979 +0000 UTC m=+6148.845275008" watchObservedRunningTime="2026-01-21 16:16:26.778697237 +0000 UTC m=+6148.855530266" Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.860935 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.861000 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.949229 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.949294 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:27 crc kubenswrapper[4902]: I0121 16:16:27.060157 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-7k4p6"] Jan 21 16:16:27 crc kubenswrapper[4902]: I0121 16:16:27.072291 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-7k4p6"] Jan 21 16:16:28 crc kubenswrapper[4902]: I0121 16:16:28.307569 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2" path="/var/lib/kubelet/pods/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2/volumes" Jan 21 16:16:33 crc kubenswrapper[4902]: I0121 16:16:33.358677 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:33 crc kubenswrapper[4902]: I0121 16:16:33.359464 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:33 crc kubenswrapper[4902]: I0121 16:16:33.402618 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:33 crc kubenswrapper[4902]: I0121 16:16:33.421879 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:33 crc kubenswrapper[4902]: I0121 16:16:33.437405 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:16:33 crc kubenswrapper[4902]: I0121 16:16:33.438358 4902 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:16:33 crc kubenswrapper[4902]: I0121 16:16:33.492312 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:16:33 crc kubenswrapper[4902]: I0121 16:16:33.506643 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:16:34 crc kubenswrapper[4902]: I0121 16:16:34.164773 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:34 crc kubenswrapper[4902]: I0121 16:16:34.165110 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:34 crc kubenswrapper[4902]: I0121 16:16:34.171097 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:16:34 crc kubenswrapper[4902]: I0121 16:16:34.171537 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.181131 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.181373 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.359930 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.372558 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.372664 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.417392 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.468071 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.861823 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7dd785d478-plbs7" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8443: connect: connection refused" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.950957 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-786f96566b-w596t" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.111:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8443: connect: connection refused" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.384863 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzx7"] Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.416209 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzx7"] Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.416365 4902 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.515327 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-utilities\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.515839 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-catalog-content\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.516060 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z8nq\" (UniqueName: \"kubernetes.io/projected/e51e251d-3170-44e4-aaf6-4d288115b5c3-kube-api-access-7z8nq\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.577782 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qqcwn"] Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.583156 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.592488 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qqcwn"] Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.620233 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77h79\" (UniqueName: \"kubernetes.io/projected/2c558c0f-33c5-4584-b548-fc5af8cee89e-kube-api-access-77h79\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.620328 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z8nq\" (UniqueName: \"kubernetes.io/projected/e51e251d-3170-44e4-aaf6-4d288115b5c3-kube-api-access-7z8nq\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.620391 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-utilities\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.620435 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-catalog-content\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc 
kubenswrapper[4902]: I0121 16:16:37.620494 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-utilities\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.620658 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-catalog-content\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.621235 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-utilities\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.621732 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-catalog-content\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.659781 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z8nq\" (UniqueName: \"kubernetes.io/projected/e51e251d-3170-44e4-aaf6-4d288115b5c3-kube-api-access-7z8nq\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.723232 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-utilities\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.723687 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-catalog-content\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.723854 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77h79\" (UniqueName: \"kubernetes.io/projected/2c558c0f-33c5-4584-b548-fc5af8cee89e-kube-api-access-77h79\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.723888 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-utilities\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.724350 4902 
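
The reconciler_common records above walk each catalog pod through the same volume set: two emptyDir volumes (utilities, catalog-content) plus a projected kube-api-access-* token volume. A sketch of the emptyDir declarations, assuming k8s.io/api; the token volume is injected by the service-account admission machinery rather than declared in the pod spec, so it is omitted here.

```go
package main

import corev1 "k8s.io/api/core/v1"

// The two emptyDir volumes that VerifyControllerAttachedVolume / MountVolume
// cycle through for each openshift-marketplace catalog pod above.
var catalogVolumes = []corev1.Volume{
	{Name: "utilities", VolumeSource: corev1.VolumeSource{
		EmptyDir: &corev1.EmptyDirVolumeSource{}}},
	{Name: "catalog-content", VolumeSource: corev1.VolumeSource{
		EmptyDir: &corev1.EmptyDirVolumeSource{}}},
}

func main() { _ = catalogVolumes }
```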
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-catalog-content\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.737766 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.748563 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77h79\" (UniqueName: \"kubernetes.io/projected/2c558c0f-33c5-4584-b548-fc5af8cee89e-kube-api-access-77h79\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.908505 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:38 crc kubenswrapper[4902]: I0121 16:16:38.249635 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzx7"] Jan 21 16:16:38 crc kubenswrapper[4902]: I0121 16:16:38.465225 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qqcwn"] Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.245535 4902 generic.go:334] "Generic (PLEG): container finished" podID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerID="a5edfafdeacf21f426cc5bd6281b1cd868d12717fac023895ab55ea3fbcafc1e" exitCode=0 Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.245649 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzx7" event={"ID":"e51e251d-3170-44e4-aaf6-4d288115b5c3","Type":"ContainerDied","Data":"a5edfafdeacf21f426cc5bd6281b1cd868d12717fac023895ab55ea3fbcafc1e"} Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.245851 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzx7" event={"ID":"e51e251d-3170-44e4-aaf6-4d288115b5c3","Type":"ContainerStarted","Data":"7431d53a67478bc2afebf8017741886b32171eb148f4ba07b41d4ca2523dd676"} Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.247845 4902 generic.go:334] "Generic (PLEG): container finished" podID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerID="3978acdb017791c813d4f5337aced828704d5e523e45d415b1601e3ec73ed790" exitCode=0 Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.247890 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcwn" event={"ID":"2c558c0f-33c5-4584-b548-fc5af8cee89e","Type":"ContainerDied","Data":"3978acdb017791c813d4f5337aced828704d5e523e45d415b1601e3ec73ed790"} Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.247917 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcwn" event={"ID":"2c558c0f-33c5-4584-b548-fc5af8cee89e","Type":"ContainerStarted","Data":"dee8922d5520e1f0c611f74ca26c6dc79b0da6d5a6e133ff12dfae78dbe2c30a"} Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.296071 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:16:39 crc kubenswrapper[4902]: E0121 16:16:39.296397 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.778372 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v5hnt"] Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.780495 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.788258 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk5jl\" (UniqueName: \"kubernetes.io/projected/91829544-e720-43f3-b3dd-3f1240beb6f6-kube-api-access-bk5jl\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.788571 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-utilities\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.788620 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-catalog-content\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.795459 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v5hnt"] Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.890475 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-utilities\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.890528 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-catalog-content\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.890555 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk5jl\" (UniqueName: \"kubernetes.io/projected/91829544-e720-43f3-b3dd-3f1240beb6f6-kube-api-access-bk5jl\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.891169 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-utilities\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.891191 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-catalog-content\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.913598 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk5jl\" (UniqueName: \"kubernetes.io/projected/91829544-e720-43f3-b3dd-3f1240beb6f6-kube-api-access-bk5jl\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:40 crc kubenswrapper[4902]: I0121 16:16:40.106571 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:40 crc kubenswrapper[4902]: I0121 16:16:40.750386 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v5hnt"] Jan 21 16:16:40 crc kubenswrapper[4902]: W0121 16:16:40.764019 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91829544_e720_43f3_b3dd_3f1240beb6f6.slice/crio-ff69022f1f995adb49589ec367e0815a70cf064bef008a8fa059b4c649c81ff9 WatchSource:0}: Error finding container ff69022f1f995adb49589ec367e0815a70cf064bef008a8fa059b4c649c81ff9: Status 404 returned error can't find the container with id ff69022f1f995adb49589ec367e0815a70cf064bef008a8fa059b4c649c81ff9 Jan 21 16:16:41 crc kubenswrapper[4902]: I0121 16:16:41.270253 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5hnt" event={"ID":"91829544-e720-43f3-b3dd-3f1240beb6f6","Type":"ContainerStarted","Data":"ff69022f1f995adb49589ec367e0815a70cf064bef008a8fa059b4c649c81ff9"} Jan 21 16:16:41 crc kubenswrapper[4902]: I0121 16:16:41.272284 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzx7" event={"ID":"e51e251d-3170-44e4-aaf6-4d288115b5c3","Type":"ContainerStarted","Data":"7bdedfb5108f3ffecf10a0859392a7cf8d5159f213fdc4909c0c06024f91b0c1"} Jan 21 16:16:41 crc kubenswrapper[4902]: I0121 16:16:41.274464 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcwn" event={"ID":"2c558c0f-33c5-4584-b548-fc5af8cee89e","Type":"ContainerStarted","Data":"0b15326eb4c064e7851c30f18a27df97dd90b66b41ed4359185f31df9de1590c"} Jan 21 16:16:42 crc kubenswrapper[4902]: I0121 16:16:42.289681 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5hnt" event={"ID":"91829544-e720-43f3-b3dd-3f1240beb6f6","Type":"ContainerStarted","Data":"6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97"} Jan 21 16:16:42 crc kubenswrapper[4902]: I0121 16:16:42.304822 4902 generic.go:334] "Generic (PLEG): container finished" podID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerID="7bdedfb5108f3ffecf10a0859392a7cf8d5159f213fdc4909c0c06024f91b0c1" exitCode=0 Jan 21 16:16:42 crc kubenswrapper[4902]: I0121 16:16:42.320929 4902 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzx7" event={"ID":"e51e251d-3170-44e4-aaf6-4d288115b5c3","Type":"ContainerDied","Data":"7bdedfb5108f3ffecf10a0859392a7cf8d5159f213fdc4909c0c06024f91b0c1"} Jan 21 16:16:43 crc kubenswrapper[4902]: I0121 16:16:43.319261 4902 generic.go:334] "Generic (PLEG): container finished" podID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerID="6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97" exitCode=0 Jan 21 16:16:43 crc kubenswrapper[4902]: I0121 16:16:43.319371 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5hnt" event={"ID":"91829544-e720-43f3-b3dd-3f1240beb6f6","Type":"ContainerDied","Data":"6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97"} Jan 21 16:16:43 crc kubenswrapper[4902]: I0121 16:16:43.323890 4902 generic.go:334] "Generic (PLEG): container finished" podID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerID="0b15326eb4c064e7851c30f18a27df97dd90b66b41ed4359185f31df9de1590c" exitCode=0 Jan 21 16:16:43 crc kubenswrapper[4902]: I0121 16:16:43.323929 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcwn" event={"ID":"2c558c0f-33c5-4584-b548-fc5af8cee89e","Type":"ContainerDied","Data":"0b15326eb4c064e7851c30f18a27df97dd90b66b41ed4359185f31df9de1590c"} Jan 21 16:16:44 crc kubenswrapper[4902]: I0121 16:16:44.344117 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzx7" event={"ID":"e51e251d-3170-44e4-aaf6-4d288115b5c3","Type":"ContainerStarted","Data":"69c36b0bae9178724a6d97de46722cf5b0cc80d59e4ce8f2e0554584489171d5"} Jan 21 16:16:44 crc kubenswrapper[4902]: I0121 16:16:44.349442 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcwn" event={"ID":"2c558c0f-33c5-4584-b548-fc5af8cee89e","Type":"ContainerStarted","Data":"ef3174310ae77ae7733c59eda9f2154edea9a69da8c15f2f7b007132379ea630"} Jan 21 16:16:44 crc kubenswrapper[4902]: I0121 16:16:44.391189 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qqcwn" podStartSLOduration=2.615243393 podStartE2EDuration="7.391172229s" podCreationTimestamp="2026-01-21 16:16:37 +0000 UTC" firstStartedPulling="2026-01-21 16:16:39.249653265 +0000 UTC m=+6161.326486294" lastFinishedPulling="2026-01-21 16:16:44.025582101 +0000 UTC m=+6166.102415130" observedRunningTime="2026-01-21 16:16:44.384637875 +0000 UTC m=+6166.461470914" watchObservedRunningTime="2026-01-21 16:16:44.391172229 +0000 UTC m=+6166.468005258" Jan 21 16:16:44 crc kubenswrapper[4902]: I0121 16:16:44.391421 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rgzx7" podStartSLOduration=3.892141964 podStartE2EDuration="7.391415895s" podCreationTimestamp="2026-01-21 16:16:37 +0000 UTC" firstStartedPulling="2026-01-21 16:16:39.248602525 +0000 UTC m=+6161.325435554" lastFinishedPulling="2026-01-21 16:16:42.747876456 +0000 UTC m=+6164.824709485" observedRunningTime="2026-01-21 16:16:44.365354782 +0000 UTC m=+6166.442187811" watchObservedRunningTime="2026-01-21 16:16:44.391415895 +0000 UTC m=+6166.468248914" Jan 21 16:16:45 crc kubenswrapper[4902]: I0121 16:16:45.361033 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5hnt" 
event={"ID":"91829544-e720-43f3-b3dd-3f1240beb6f6","Type":"ContainerStarted","Data":"0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139"} Jan 21 16:16:46 crc kubenswrapper[4902]: I0121 16:16:46.374005 4902 generic.go:334] "Generic (PLEG): container finished" podID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerID="0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139" exitCode=0 Jan 21 16:16:46 crc kubenswrapper[4902]: I0121 16:16:46.374307 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5hnt" event={"ID":"91829544-e720-43f3-b3dd-3f1240beb6f6","Type":"ContainerDied","Data":"0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139"} Jan 21 16:16:47 crc kubenswrapper[4902]: I0121 16:16:47.393376 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5hnt" event={"ID":"91829544-e720-43f3-b3dd-3f1240beb6f6","Type":"ContainerStarted","Data":"92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb"} Jan 21 16:16:47 crc kubenswrapper[4902]: I0121 16:16:47.424588 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v5hnt" podStartSLOduration=4.9246807409999995 podStartE2EDuration="8.424564369s" podCreationTimestamp="2026-01-21 16:16:39 +0000 UTC" firstStartedPulling="2026-01-21 16:16:43.321616842 +0000 UTC m=+6165.398449881" lastFinishedPulling="2026-01-21 16:16:46.82150048 +0000 UTC m=+6168.898333509" observedRunningTime="2026-01-21 16:16:47.419493336 +0000 UTC m=+6169.496326375" watchObservedRunningTime="2026-01-21 16:16:47.424564369 +0000 UTC m=+6169.501397398" Jan 21 16:16:47 crc kubenswrapper[4902]: I0121 16:16:47.738231 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:47 crc kubenswrapper[4902]: I0121 16:16:47.738562 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:47 crc kubenswrapper[4902]: I0121 16:16:47.793105 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:47 crc kubenswrapper[4902]: I0121 16:16:47.909009 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:47 crc kubenswrapper[4902]: I0121 16:16:47.909945 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:48 crc kubenswrapper[4902]: I0121 16:16:48.458832 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:48 crc kubenswrapper[4902]: I0121 16:16:48.731248 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:48 crc kubenswrapper[4902]: I0121 16:16:48.904702 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:48 crc kubenswrapper[4902]: I0121 16:16:48.954362 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qqcwn" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="registry-server" probeResult="failure" output=< Jan 21 16:16:48 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s 
Jan 21 16:16:48 crc kubenswrapper[4902]: > Jan 21 16:16:49 crc kubenswrapper[4902]: I0121 16:16:49.050020 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-w8j46"] Jan 21 16:16:49 crc kubenswrapper[4902]: I0121 16:16:49.070912 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-w8j46"] Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.039772 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cb7a-account-create-update-qqdxl"] Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.048600 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cb7a-account-create-update-qqdxl"] Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.108382 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.108447 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.158807 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.305189 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b91136e9-5bad-4d5c-8eff-8a77985a1726" path="/var/lib/kubelet/pods/b91136e9-5bad-4d5c-8eff-8a77985a1726/volumes" Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.305966 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3ecff7c-0bbc-47c7-82b4-fbdce132c94b" path="/var/lib/kubelet/pods/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b/volumes" Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.368644 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.835901 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.938251 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7dd785d478-plbs7"] Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.938532 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7dd785d478-plbs7" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon-log" containerID="cri-o://4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f" gracePeriod=30 Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.939145 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7dd785d478-plbs7" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon" containerID="cri-o://f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b" gracePeriod=30 Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.981179 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzx7"] Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.981667 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rgzx7" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerName="registry-server" containerID="cri-o://69c36b0bae9178724a6d97de46722cf5b0cc80d59e4ce8f2e0554584489171d5" gracePeriod=2 Jan 21 
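
The registry-server failure output above ("timeout: failed to connect service \":50051\" within 1s") is the characteristic message of grpc_health_probe, which OLM catalog pods typically run as an exec probe against the registry's gRPC port. A hypothetical probe definition consistent with that output; the exact command in these pods is not shown in the log.

```go
package main

import corev1 "k8s.io/api/core/v1"

// Hypothetical exec probe consistent with the registry-server output above.
var registryStartupProbe = &corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		Exec: &corev1.ExecAction{
			Command: []string{"grpc_health_probe", "-addr=:50051"},
		},
	},
	TimeoutSeconds: 1, // matches the "within 1s" in the failure output
}

func main() { _ = registryStartupProbe }
```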
16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.436366 4902 generic.go:334] "Generic (PLEG): container finished" podID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerID="69c36b0bae9178724a6d97de46722cf5b0cc80d59e4ce8f2e0554584489171d5" exitCode=0 Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.436445 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzx7" event={"ID":"e51e251d-3170-44e4-aaf6-4d288115b5c3","Type":"ContainerDied","Data":"69c36b0bae9178724a6d97de46722cf5b0cc80d59e4ce8f2e0554584489171d5"} Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.437509 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzx7" event={"ID":"e51e251d-3170-44e4-aaf6-4d288115b5c3","Type":"ContainerDied","Data":"7431d53a67478bc2afebf8017741886b32171eb148f4ba07b41d4ca2523dd676"} Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.437588 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7431d53a67478bc2afebf8017741886b32171eb148f4ba07b41d4ca2523dd676" Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.494596 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.557077 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-utilities\") pod \"e51e251d-3170-44e4-aaf6-4d288115b5c3\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.557219 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z8nq\" (UniqueName: \"kubernetes.io/projected/e51e251d-3170-44e4-aaf6-4d288115b5c3-kube-api-access-7z8nq\") pod \"e51e251d-3170-44e4-aaf6-4d288115b5c3\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.557751 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-catalog-content\") pod \"e51e251d-3170-44e4-aaf6-4d288115b5c3\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.558015 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-utilities" (OuterVolumeSpecName: "utilities") pod "e51e251d-3170-44e4-aaf6-4d288115b5c3" (UID: "e51e251d-3170-44e4-aaf6-4d288115b5c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.558995 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.563154 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e51e251d-3170-44e4-aaf6-4d288115b5c3-kube-api-access-7z8nq" (OuterVolumeSpecName: "kube-api-access-7z8nq") pod "e51e251d-3170-44e4-aaf6-4d288115b5c3" (UID: "e51e251d-3170-44e4-aaf6-4d288115b5c3"). InnerVolumeSpecName "kube-api-access-7z8nq". 
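
The SyncLoop DELETE records above carry two different grace periods: 30s for the horizon containers and 2s for the marketplace registry pod. A client-go sketch of a pod deletion with an explicit grace period; the namespace and pod name are taken from the log, the wiring around them is illustrative.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Delete a pod with a 2s grace period, as the API server did for
// redhat-marketplace-rgzx7 above (gracePeriod=2).
func deleteWithGrace(cfg *rest.Config) error {
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	grace := int64(2)
	return client.CoreV1().Pods("openshift-marketplace").Delete(
		context.TODO(),
		"redhat-marketplace-rgzx7",
		metav1.DeleteOptions{GracePeriodSeconds: &grace},
	)
}

func main() {}
```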
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.583123 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e51e251d-3170-44e4-aaf6-4d288115b5c3" (UID: "e51e251d-3170-44e4-aaf6-4d288115b5c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.661222 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z8nq\" (UniqueName: \"kubernetes.io/projected/e51e251d-3170-44e4-aaf6-4d288115b5c3-kube-api-access-7z8nq\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.661259 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:52 crc kubenswrapper[4902]: I0121 16:16:52.451019 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:52 crc kubenswrapper[4902]: I0121 16:16:52.479574 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzx7"] Jan 21 16:16:52 crc kubenswrapper[4902]: I0121 16:16:52.495088 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzx7"] Jan 21 16:16:53 crc kubenswrapper[4902]: I0121 16:16:53.294850 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:16:53 crc kubenswrapper[4902]: E0121 16:16:53.295659 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.185701 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.197973 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.304767 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" path="/var/lib/kubelet/pods/e51e251d-3170-44e4-aaf6-4d288115b5c3/volumes" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.322115 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzbxg\" (UniqueName: \"kubernetes.io/projected/941246aa-c88c-4447-95a9-0efe08817612-kube-api-access-zzbxg\") pod \"941246aa-c88c-4447-95a9-0efe08817612\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.322559 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/941246aa-c88c-4447-95a9-0efe08817612-horizon-secret-key\") pod \"941246aa-c88c-4447-95a9-0efe08817612\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.322713 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-scripts\") pod \"941246aa-c88c-4447-95a9-0efe08817612\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.322865 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-scripts\") pod \"1a336745-0278-402d-b4c1-3b58f8fa66e9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.322979 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/941246aa-c88c-4447-95a9-0efe08817612-logs\") pod \"941246aa-c88c-4447-95a9-0efe08817612\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.323072 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1a336745-0278-402d-b4c1-3b58f8fa66e9-horizon-secret-key\") pod \"1a336745-0278-402d-b4c1-3b58f8fa66e9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.323420 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/941246aa-c88c-4447-95a9-0efe08817612-logs" (OuterVolumeSpecName: "logs") pod "941246aa-c88c-4447-95a9-0efe08817612" (UID: "941246aa-c88c-4447-95a9-0efe08817612"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.324321 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-config-data\") pod \"941246aa-c88c-4447-95a9-0efe08817612\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.324407 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npmcz\" (UniqueName: \"kubernetes.io/projected/1a336745-0278-402d-b4c1-3b58f8fa66e9-kube-api-access-npmcz\") pod \"1a336745-0278-402d-b4c1-3b58f8fa66e9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.324573 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a336745-0278-402d-b4c1-3b58f8fa66e9-logs\") pod \"1a336745-0278-402d-b4c1-3b58f8fa66e9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.324741 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-config-data\") pod \"1a336745-0278-402d-b4c1-3b58f8fa66e9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.325506 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/941246aa-c88c-4447-95a9-0efe08817612-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.327870 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a336745-0278-402d-b4c1-3b58f8fa66e9-logs" (OuterVolumeSpecName: "logs") pod "1a336745-0278-402d-b4c1-3b58f8fa66e9" (UID: "1a336745-0278-402d-b4c1-3b58f8fa66e9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.328059 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a336745-0278-402d-b4c1-3b58f8fa66e9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1a336745-0278-402d-b4c1-3b58f8fa66e9" (UID: "1a336745-0278-402d-b4c1-3b58f8fa66e9"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.328336 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941246aa-c88c-4447-95a9-0efe08817612-kube-api-access-zzbxg" (OuterVolumeSpecName: "kube-api-access-zzbxg") pod "941246aa-c88c-4447-95a9-0efe08817612" (UID: "941246aa-c88c-4447-95a9-0efe08817612"). InnerVolumeSpecName "kube-api-access-zzbxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.329592 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/941246aa-c88c-4447-95a9-0efe08817612-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "941246aa-c88c-4447-95a9-0efe08817612" (UID: "941246aa-c88c-4447-95a9-0efe08817612"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.330111 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a336745-0278-402d-b4c1-3b58f8fa66e9-kube-api-access-npmcz" (OuterVolumeSpecName: "kube-api-access-npmcz") pod "1a336745-0278-402d-b4c1-3b58f8fa66e9" (UID: "1a336745-0278-402d-b4c1-3b58f8fa66e9"). InnerVolumeSpecName "kube-api-access-npmcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.347925 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-config-data" (OuterVolumeSpecName: "config-data") pod "941246aa-c88c-4447-95a9-0efe08817612" (UID: "941246aa-c88c-4447-95a9-0efe08817612"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.351583 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-config-data" (OuterVolumeSpecName: "config-data") pod "1a336745-0278-402d-b4c1-3b58f8fa66e9" (UID: "1a336745-0278-402d-b4c1-3b58f8fa66e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.359410 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-scripts" (OuterVolumeSpecName: "scripts") pod "941246aa-c88c-4447-95a9-0efe08817612" (UID: "941246aa-c88c-4447-95a9-0efe08817612"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.362191 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-scripts" (OuterVolumeSpecName: "scripts") pod "1a336745-0278-402d-b4c1-3b58f8fa66e9" (UID: "1a336745-0278-402d-b4c1-3b58f8fa66e9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428059 4902 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/941246aa-c88c-4447-95a9-0efe08817612-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428096 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428106 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428114 4902 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1a336745-0278-402d-b4c1-3b58f8fa66e9-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428122 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428130 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npmcz\" (UniqueName: \"kubernetes.io/projected/1a336745-0278-402d-b4c1-3b58f8fa66e9-kube-api-access-npmcz\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428139 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a336745-0278-402d-b4c1-3b58f8fa66e9-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428149 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428162 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzbxg\" (UniqueName: \"kubernetes.io/projected/941246aa-c88c-4447-95a9-0efe08817612-kube-api-access-zzbxg\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.485369 4902 generic.go:334] "Generic (PLEG): container finished" podID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerID="f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b" exitCode=0 Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.485422 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd785d478-plbs7" event={"ID":"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c","Type":"ContainerDied","Data":"f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b"} Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.487211 4902 generic.go:334] "Generic (PLEG): container finished" podID="941246aa-c88c-4447-95a9-0efe08817612" containerID="7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d" exitCode=137 Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.487233 4902 generic.go:334] "Generic (PLEG): container finished" podID="941246aa-c88c-4447-95a9-0efe08817612" containerID="65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4" exitCode=137 Jan 21 16:16:54 crc 
kubenswrapper[4902]: I0121 16:16:54.487260 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdc5859df-vpr9s" event={"ID":"941246aa-c88c-4447-95a9-0efe08817612","Type":"ContainerDied","Data":"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d"} Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.487278 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdc5859df-vpr9s" event={"ID":"941246aa-c88c-4447-95a9-0efe08817612","Type":"ContainerDied","Data":"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4"} Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.487288 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdc5859df-vpr9s" event={"ID":"941246aa-c88c-4447-95a9-0efe08817612","Type":"ContainerDied","Data":"4b82e831be0b81da6f10c4a3ff492b70b47b678bffe68000c6d720bf8f6f3d32"} Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.487303 4902 scope.go:117] "RemoveContainer" containerID="7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.487420 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.493704 4902 generic.go:334] "Generic (PLEG): container finished" podID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerID="ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2" exitCode=137 Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.493983 4902 generic.go:334] "Generic (PLEG): container finished" podID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerID="60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890" exitCode=137 Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.494093 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5998889f69-hx8w9" event={"ID":"1a336745-0278-402d-b4c1-3b58f8fa66e9","Type":"ContainerDied","Data":"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2"} Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.494216 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5998889f69-hx8w9" event={"ID":"1a336745-0278-402d-b4c1-3b58f8fa66e9","Type":"ContainerDied","Data":"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890"} Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.494292 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5998889f69-hx8w9" event={"ID":"1a336745-0278-402d-b4c1-3b58f8fa66e9","Type":"ContainerDied","Data":"6e949b6543efff5451a444f3bd5efb9a9f0312a98d89cdd384669d329bffb82f"} Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.494401 4902 util.go:48] "No ready sandbox for pod can be found. 
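
The exitCode=137 values above are 128 + 9, i.e. SIGKILL: the horizon containers killed at 16:16:23 with gracePeriod=30 did not exit in time, so the runtime force-killed them roughly thirty seconds later (contrast the horizon container at 16:16:54.485 that exited 0 inside its grace period). A small decoder for the convention:

```go
package main

import (
	"fmt"
	"syscall"
)

// Exit codes above 128 conventionally encode 128 + signal number; the
// exitCode=137 records above are 128 + 9 (SIGKILL) after the 30s grace
// period from the earlier "Killing container" records ran out.
func decodeExit(code int) string {
	if code > 128 {
		return fmt.Sprintf("killed by signal %d (%v)", code-128, syscall.Signal(code-128))
	}
	return fmt.Sprintf("exited normally with status %d", code)
}

func main() {
	fmt.Println(decodeExit(137)) // killed by signal 9 (killed)
	fmt.Println(decodeExit(0))   // exited normally with status 0
}
```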
Need to start a new one" pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.526353 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6cdc5859df-vpr9s"] Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.537077 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6cdc5859df-vpr9s"] Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.553371 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5998889f69-hx8w9"] Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.591096 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5998889f69-hx8w9"] Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.717248 4902 scope.go:117] "RemoveContainer" containerID="65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.793393 4902 scope.go:117] "RemoveContainer" containerID="7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d" Jan 21 16:16:54 crc kubenswrapper[4902]: E0121 16:16:54.793872 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d\": container with ID starting with 7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d not found: ID does not exist" containerID="7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.793929 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d"} err="failed to get container status \"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d\": rpc error: code = NotFound desc = could not find container \"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d\": container with ID starting with 7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d not found: ID does not exist" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.793957 4902 scope.go:117] "RemoveContainer" containerID="65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4" Jan 21 16:16:54 crc kubenswrapper[4902]: E0121 16:16:54.794534 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4\": container with ID starting with 65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4 not found: ID does not exist" containerID="65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.794581 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4"} err="failed to get container status \"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4\": rpc error: code = NotFound desc = could not find container \"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4\": container with ID starting with 65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4 not found: ID does not exist" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.794615 4902 scope.go:117] "RemoveContainer" containerID="7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d" Jan 21 16:16:54 
crc kubenswrapper[4902]: I0121 16:16:54.794909 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d"} err="failed to get container status \"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d\": rpc error: code = NotFound desc = could not find container \"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d\": container with ID starting with 7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d not found: ID does not exist" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.794931 4902 scope.go:117] "RemoveContainer" containerID="65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.795319 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4"} err="failed to get container status \"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4\": rpc error: code = NotFound desc = could not find container \"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4\": container with ID starting with 65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4 not found: ID does not exist" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.795346 4902 scope.go:117] "RemoveContainer" containerID="ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.986594 4902 scope.go:117] "RemoveContainer" containerID="60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890" Jan 21 16:16:55 crc kubenswrapper[4902]: I0121 16:16:55.008902 4902 scope.go:117] "RemoveContainer" containerID="ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2" Jan 21 16:16:55 crc kubenswrapper[4902]: E0121 16:16:55.009421 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2\": container with ID starting with ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2 not found: ID does not exist" containerID="ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2" Jan 21 16:16:55 crc kubenswrapper[4902]: I0121 16:16:55.009472 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2"} err="failed to get container status \"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2\": rpc error: code = NotFound desc = could not find container \"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2\": container with ID starting with ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2 not found: ID does not exist" Jan 21 16:16:55 crc kubenswrapper[4902]: I0121 16:16:55.009498 4902 scope.go:117] "RemoveContainer" containerID="60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890" Jan 21 16:16:55 crc kubenswrapper[4902]: E0121 16:16:55.010106 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890\": container with ID starting with 60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890 not found: ID does not exist" 
containerID="60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890" Jan 21 16:16:55 crc kubenswrapper[4902]: I0121 16:16:55.010145 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890"} err="failed to get container status \"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890\": rpc error: code = NotFound desc = could not find container \"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890\": container with ID starting with 60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890 not found: ID does not exist" Jan 21 16:16:55 crc kubenswrapper[4902]: I0121 16:16:55.010173 4902 scope.go:117] "RemoveContainer" containerID="ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2" Jan 21 16:16:55 crc kubenswrapper[4902]: I0121 16:16:55.010454 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2"} err="failed to get container status \"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2\": rpc error: code = NotFound desc = could not find container \"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2\": container with ID starting with ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2 not found: ID does not exist" Jan 21 16:16:55 crc kubenswrapper[4902]: I0121 16:16:55.010483 4902 scope.go:117] "RemoveContainer" containerID="60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890" Jan 21 16:16:55 crc kubenswrapper[4902]: I0121 16:16:55.010753 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890"} err="failed to get container status \"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890\": rpc error: code = NotFound desc = could not find container \"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890\": container with ID starting with 60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890 not found: ID does not exist" Jan 21 16:16:56 crc kubenswrapper[4902]: I0121 16:16:56.308935 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" path="/var/lib/kubelet/pods/1a336745-0278-402d-b4c1-3b58f8fa66e9/volumes" Jan 21 16:16:56 crc kubenswrapper[4902]: I0121 16:16:56.310247 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="941246aa-c88c-4447-95a9-0efe08817612" path="/var/lib/kubelet/pods/941246aa-c88c-4447-95a9-0efe08817612/volumes" Jan 21 16:16:56 crc kubenswrapper[4902]: I0121 16:16:56.860670 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7dd785d478-plbs7" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8443: connect: connection refused" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.005975 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6845bd7746-jd2dk"] Jan 21 16:16:58 crc kubenswrapper[4902]: E0121 16:16:58.006345 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerName="registry-server" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006358 4902 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerName="registry-server" Jan 21 16:16:58 crc kubenswrapper[4902]: E0121 16:16:58.006372 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941246aa-c88c-4447-95a9-0efe08817612" containerName="horizon" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006378 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="941246aa-c88c-4447-95a9-0efe08817612" containerName="horizon" Jan 21 16:16:58 crc kubenswrapper[4902]: E0121 16:16:58.006391 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerName="horizon" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006398 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerName="horizon" Jan 21 16:16:58 crc kubenswrapper[4902]: E0121 16:16:58.006415 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerName="extract-utilities" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006421 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerName="extract-utilities" Jan 21 16:16:58 crc kubenswrapper[4902]: E0121 16:16:58.006437 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerName="extract-content" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006443 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerName="extract-content" Jan 21 16:16:58 crc kubenswrapper[4902]: E0121 16:16:58.006453 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941246aa-c88c-4447-95a9-0efe08817612" containerName="horizon-log" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006459 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="941246aa-c88c-4447-95a9-0efe08817612" containerName="horizon-log" Jan 21 16:16:58 crc kubenswrapper[4902]: E0121 16:16:58.006472 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerName="horizon-log" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006479 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerName="horizon-log" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006666 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerName="horizon" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006680 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerName="horizon-log" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006698 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerName="registry-server" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006707 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="941246aa-c88c-4447-95a9-0efe08817612" containerName="horizon" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006719 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="941246aa-c88c-4447-95a9-0efe08817612" containerName="horizon-log" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.007680 4902 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.026397 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6845bd7746-jd2dk"] Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.068141 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-fj4nd"] Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.078789 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-fj4nd"] Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.104665 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-horizon-tls-certs\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.104994 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmhtc\" (UniqueName: \"kubernetes.io/projected/d71e079c-1163-4e7e-ac94-0e92a0b602ad-kube-api-access-lmhtc\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.105184 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d71e079c-1163-4e7e-ac94-0e92a0b602ad-config-data\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.105392 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d71e079c-1163-4e7e-ac94-0e92a0b602ad-logs\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.105443 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-horizon-secret-key\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.105500 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-combined-ca-bundle\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.105595 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d71e079c-1163-4e7e-ac94-0e92a0b602ad-scripts\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.208394 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d71e079c-1163-4e7e-ac94-0e92a0b602ad-logs\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.208452 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-horizon-secret-key\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.208492 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-combined-ca-bundle\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.208554 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d71e079c-1163-4e7e-ac94-0e92a0b602ad-scripts\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.208663 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-horizon-tls-certs\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.208712 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmhtc\" (UniqueName: \"kubernetes.io/projected/d71e079c-1163-4e7e-ac94-0e92a0b602ad-kube-api-access-lmhtc\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.208740 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d71e079c-1163-4e7e-ac94-0e92a0b602ad-config-data\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.209612 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d71e079c-1163-4e7e-ac94-0e92a0b602ad-scripts\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.209869 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d71e079c-1163-4e7e-ac94-0e92a0b602ad-logs\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.210147 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d71e079c-1163-4e7e-ac94-0e92a0b602ad-config-data\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " 
pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.215110 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-combined-ca-bundle\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.216424 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-horizon-secret-key\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.217418 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-horizon-tls-certs\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.232564 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmhtc\" (UniqueName: \"kubernetes.io/projected/d71e079c-1163-4e7e-ac94-0e92a0b602ad-kube-api-access-lmhtc\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.333833 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.350015 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97a9d8bd-92b5-42ef-b945-6b3ccc65b48b" path="/var/lib/kubelet/pods/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b/volumes" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:58.664519 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6845bd7746-jd2dk"] Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:58.978013 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qqcwn" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="registry-server" probeResult="failure" output=< Jan 21 16:16:59 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 16:16:59 crc kubenswrapper[4902]: > Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.422407 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-bjrq8"] Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.423857 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-bjrq8" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.446875 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-bjrq8"] Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.540553 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-operator-scripts\") pod \"heat-db-create-bjrq8\" (UID: \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\") " pod="openstack/heat-db-create-bjrq8" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.540817 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlb9x\" (UniqueName: \"kubernetes.io/projected/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-kube-api-access-rlb9x\") pod \"heat-db-create-bjrq8\" (UID: \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\") " pod="openstack/heat-db-create-bjrq8" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.611383 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6845bd7746-jd2dk" event={"ID":"d71e079c-1163-4e7e-ac94-0e92a0b602ad","Type":"ContainerStarted","Data":"74319219889ae992328ba3bb88887722d9a59a9fb9b6d7bc638fc9ef5c4bbc13"} Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.611440 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6845bd7746-jd2dk" event={"ID":"d71e079c-1163-4e7e-ac94-0e92a0b602ad","Type":"ContainerStarted","Data":"45961fffe29aeb66e030e936b7281223fa1ca4ff3c15da5cb24296c85e4cc52f"} Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.611456 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6845bd7746-jd2dk" event={"ID":"d71e079c-1163-4e7e-ac94-0e92a0b602ad","Type":"ContainerStarted","Data":"22d498506777dee603fed38e172392f2d8ac28ff01f8b9ba12ba8e248aa24e72"} Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.643038 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlb9x\" (UniqueName: \"kubernetes.io/projected/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-kube-api-access-rlb9x\") pod \"heat-db-create-bjrq8\" (UID: \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\") " pod="openstack/heat-db-create-bjrq8" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.643101 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-operator-scripts\") pod \"heat-db-create-bjrq8\" (UID: \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\") " pod="openstack/heat-db-create-bjrq8" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.644789 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-operator-scripts\") pod \"heat-db-create-bjrq8\" (UID: \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\") " pod="openstack/heat-db-create-bjrq8" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.649713 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-11d1-account-create-update-c7r42"] Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.651094 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.653633 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.665085 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-11d1-account-create-update-c7r42"] Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.665517 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6845bd7746-jd2dk" podStartSLOduration=2.6654939239999997 podStartE2EDuration="2.665493924s" podCreationTimestamp="2026-01-21 16:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:16:59.644366689 +0000 UTC m=+6181.721199718" watchObservedRunningTime="2026-01-21 16:16:59.665493924 +0000 UTC m=+6181.742326953" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.684094 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlb9x\" (UniqueName: \"kubernetes.io/projected/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-kube-api-access-rlb9x\") pod \"heat-db-create-bjrq8\" (UID: \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\") " pod="openstack/heat-db-create-bjrq8" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.745120 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff9e17b7-5e08-4042-9b1b-ccad64651eef-operator-scripts\") pod \"heat-11d1-account-create-update-c7r42\" (UID: \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\") " pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.745180 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbhnz\" (UniqueName: \"kubernetes.io/projected/ff9e17b7-5e08-4042-9b1b-ccad64651eef-kube-api-access-cbhnz\") pod \"heat-11d1-account-create-update-c7r42\" (UID: \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\") " pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.806661 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-bjrq8" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.847714 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff9e17b7-5e08-4042-9b1b-ccad64651eef-operator-scripts\") pod \"heat-11d1-account-create-update-c7r42\" (UID: \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\") " pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.848036 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbhnz\" (UniqueName: \"kubernetes.io/projected/ff9e17b7-5e08-4042-9b1b-ccad64651eef-kube-api-access-cbhnz\") pod \"heat-11d1-account-create-update-c7r42\" (UID: \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\") " pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.848587 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff9e17b7-5e08-4042-9b1b-ccad64651eef-operator-scripts\") pod \"heat-11d1-account-create-update-c7r42\" (UID: \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\") " pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.872731 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbhnz\" (UniqueName: \"kubernetes.io/projected/ff9e17b7-5e08-4042-9b1b-ccad64651eef-kube-api-access-cbhnz\") pod \"heat-11d1-account-create-update-c7r42\" (UID: \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\") " pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.970705 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.187692 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.272128 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v5hnt"] Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.364394 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-bjrq8"] Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.515471 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-11d1-account-create-update-c7r42"] Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.651552 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-bjrq8" event={"ID":"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c","Type":"ContainerStarted","Data":"745a40ea71fa0659b994aca7c2aff73301bd6c551946f45e224b9ab71b71e18f"} Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.651922 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-bjrq8" event={"ID":"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c","Type":"ContainerStarted","Data":"a98983d796b14519949744b954be43459febb454e85432558a7cb24d6e5aa795"} Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.654486 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-11d1-account-create-update-c7r42" event={"ID":"ff9e17b7-5e08-4042-9b1b-ccad64651eef","Type":"ContainerStarted","Data":"8796f7c4e100512924c5fa2afe8b23fc4c173a893529ebcc3ca8923c22390fcb"} Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.654821 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v5hnt" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerName="registry-server" containerID="cri-o://92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb" gracePeriod=2 Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.672491 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-bjrq8" podStartSLOduration=1.672467591 podStartE2EDuration="1.672467591s" podCreationTimestamp="2026-01-21 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:17:00.669211229 +0000 UTC m=+6182.746044258" watchObservedRunningTime="2026-01-21 16:17:00.672467591 +0000 UTC m=+6182.749300620" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.025683 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.074506 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk5jl\" (UniqueName: \"kubernetes.io/projected/91829544-e720-43f3-b3dd-3f1240beb6f6-kube-api-access-bk5jl\") pod \"91829544-e720-43f3-b3dd-3f1240beb6f6\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.074709 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-utilities\") pod \"91829544-e720-43f3-b3dd-3f1240beb6f6\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.074753 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-catalog-content\") pod \"91829544-e720-43f3-b3dd-3f1240beb6f6\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.077366 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-utilities" (OuterVolumeSpecName: "utilities") pod "91829544-e720-43f3-b3dd-3f1240beb6f6" (UID: "91829544-e720-43f3-b3dd-3f1240beb6f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.082457 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91829544-e720-43f3-b3dd-3f1240beb6f6-kube-api-access-bk5jl" (OuterVolumeSpecName: "kube-api-access-bk5jl") pod "91829544-e720-43f3-b3dd-3f1240beb6f6" (UID: "91829544-e720-43f3-b3dd-3f1240beb6f6"). InnerVolumeSpecName "kube-api-access-bk5jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.128627 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91829544-e720-43f3-b3dd-3f1240beb6f6" (UID: "91829544-e720-43f3-b3dd-3f1240beb6f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.177136 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.177179 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.177192 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk5jl\" (UniqueName: \"kubernetes.io/projected/91829544-e720-43f3-b3dd-3f1240beb6f6-kube-api-access-bk5jl\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.664768 4902 generic.go:334] "Generic (PLEG): container finished" podID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerID="92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb" exitCode=0 Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.664896 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.664921 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5hnt" event={"ID":"91829544-e720-43f3-b3dd-3f1240beb6f6","Type":"ContainerDied","Data":"92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb"} Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.669118 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5hnt" event={"ID":"91829544-e720-43f3-b3dd-3f1240beb6f6","Type":"ContainerDied","Data":"ff69022f1f995adb49589ec367e0815a70cf064bef008a8fa059b4c649c81ff9"} Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.669154 4902 scope.go:117] "RemoveContainer" containerID="92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.671527 4902 generic.go:334] "Generic (PLEG): container finished" podID="217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c" containerID="745a40ea71fa0659b994aca7c2aff73301bd6c551946f45e224b9ab71b71e18f" exitCode=0 Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.671584 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-bjrq8" event={"ID":"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c","Type":"ContainerDied","Data":"745a40ea71fa0659b994aca7c2aff73301bd6c551946f45e224b9ab71b71e18f"} Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.677610 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff9e17b7-5e08-4042-9b1b-ccad64651eef" containerID="8e3ea4085f3e9419958669812fbb80d867719697fa5d6f29fd25013487806482" exitCode=0 Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.677665 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-11d1-account-create-update-c7r42" event={"ID":"ff9e17b7-5e08-4042-9b1b-ccad64651eef","Type":"ContainerDied","Data":"8e3ea4085f3e9419958669812fbb80d867719697fa5d6f29fd25013487806482"} Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.700947 4902 scope.go:117] "RemoveContainer" containerID="0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.724195 4902 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/certified-operators-v5hnt"] Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.727728 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v5hnt"] Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.736984 4902 scope.go:117] "RemoveContainer" containerID="6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.769462 4902 scope.go:117] "RemoveContainer" containerID="92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb" Jan 21 16:17:01 crc kubenswrapper[4902]: E0121 16:17:01.769945 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb\": container with ID starting with 92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb not found: ID does not exist" containerID="92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.769977 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb"} err="failed to get container status \"92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb\": rpc error: code = NotFound desc = could not find container \"92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb\": container with ID starting with 92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb not found: ID does not exist" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.770000 4902 scope.go:117] "RemoveContainer" containerID="0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139" Jan 21 16:17:01 crc kubenswrapper[4902]: E0121 16:17:01.770273 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139\": container with ID starting with 0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139 not found: ID does not exist" containerID="0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.770325 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139"} err="failed to get container status \"0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139\": rpc error: code = NotFound desc = could not find container \"0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139\": container with ID starting with 0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139 not found: ID does not exist" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.770340 4902 scope.go:117] "RemoveContainer" containerID="6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97" Jan 21 16:17:01 crc kubenswrapper[4902]: E0121 16:17:01.770587 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97\": container with ID starting with 6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97 not found: ID does not exist" containerID="6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97" Jan 21 
16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.770607 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97"} err="failed to get container status \"6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97\": rpc error: code = NotFound desc = could not find container \"6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97\": container with ID starting with 6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97 not found: ID does not exist" Jan 21 16:17:02 crc kubenswrapper[4902]: I0121 16:17:02.310620 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" path="/var/lib/kubelet/pods/91829544-e720-43f3-b3dd-3f1240beb6f6/volumes" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.204023 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.211696 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-bjrq8" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.316526 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlb9x\" (UniqueName: \"kubernetes.io/projected/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-kube-api-access-rlb9x\") pod \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\" (UID: \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\") " Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.316674 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff9e17b7-5e08-4042-9b1b-ccad64651eef-operator-scripts\") pod \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\" (UID: \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\") " Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.316720 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbhnz\" (UniqueName: \"kubernetes.io/projected/ff9e17b7-5e08-4042-9b1b-ccad64651eef-kube-api-access-cbhnz\") pod \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\" (UID: \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\") " Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.316849 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-operator-scripts\") pod \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\" (UID: \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\") " Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.317232 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff9e17b7-5e08-4042-9b1b-ccad64651eef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff9e17b7-5e08-4042-9b1b-ccad64651eef" (UID: "ff9e17b7-5e08-4042-9b1b-ccad64651eef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.317619 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c" (UID: "217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.317905 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff9e17b7-5e08-4042-9b1b-ccad64651eef-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.317932 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.323710 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-kube-api-access-rlb9x" (OuterVolumeSpecName: "kube-api-access-rlb9x") pod "217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c" (UID: "217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c"). InnerVolumeSpecName "kube-api-access-rlb9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.324506 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff9e17b7-5e08-4042-9b1b-ccad64651eef-kube-api-access-cbhnz" (OuterVolumeSpecName: "kube-api-access-cbhnz") pod "ff9e17b7-5e08-4042-9b1b-ccad64651eef" (UID: "ff9e17b7-5e08-4042-9b1b-ccad64651eef"). InnerVolumeSpecName "kube-api-access-cbhnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.420233 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlb9x\" (UniqueName: \"kubernetes.io/projected/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-kube-api-access-rlb9x\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.420263 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbhnz\" (UniqueName: \"kubernetes.io/projected/ff9e17b7-5e08-4042-9b1b-ccad64651eef-kube-api-access-cbhnz\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.705219 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-11d1-account-create-update-c7r42" event={"ID":"ff9e17b7-5e08-4042-9b1b-ccad64651eef","Type":"ContainerDied","Data":"8796f7c4e100512924c5fa2afe8b23fc4c173a893529ebcc3ca8923c22390fcb"} Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.705253 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.705278 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8796f7c4e100512924c5fa2afe8b23fc4c173a893529ebcc3ca8923c22390fcb" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.710888 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-bjrq8" event={"ID":"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c","Type":"ContainerDied","Data":"a98983d796b14519949744b954be43459febb454e85432558a7cb24d6e5aa795"} Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.710938 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a98983d796b14519949744b954be43459febb454e85432558a7cb24d6e5aa795" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.710993 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-bjrq8" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.794133 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-5zjtz"] Jan 21 16:17:04 crc kubenswrapper[4902]: E0121 16:17:04.794902 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerName="extract-utilities" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.794917 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerName="extract-utilities" Jan 21 16:17:04 crc kubenswrapper[4902]: E0121 16:17:04.794927 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9e17b7-5e08-4042-9b1b-ccad64651eef" containerName="mariadb-account-create-update" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.794933 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9e17b7-5e08-4042-9b1b-ccad64651eef" containerName="mariadb-account-create-update" Jan 21 16:17:04 crc kubenswrapper[4902]: E0121 16:17:04.794959 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerName="extract-content" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.794966 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerName="extract-content" Jan 21 16:17:04 crc kubenswrapper[4902]: E0121 16:17:04.794980 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c" containerName="mariadb-database-create" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.794986 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c" containerName="mariadb-database-create" Jan 21 16:17:04 crc kubenswrapper[4902]: E0121 16:17:04.795001 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerName="registry-server" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.795007 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerName="registry-server" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.795198 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c" containerName="mariadb-database-create" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.795212 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff9e17b7-5e08-4042-9b1b-ccad64651eef" containerName="mariadb-account-create-update" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.795231 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerName="registry-server" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.795917 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.811146 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-q7twz" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.811271 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.811406 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-5zjtz"] Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.949174 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxmnr\" (UniqueName: \"kubernetes.io/projected/b1a02641-de79-49cd-91a4-d689c669a38c-kube-api-access-hxmnr\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.949340 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-combined-ca-bundle\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.949432 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-config-data\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.051336 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxmnr\" (UniqueName: \"kubernetes.io/projected/b1a02641-de79-49cd-91a4-d689c669a38c-kube-api-access-hxmnr\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.051449 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-combined-ca-bundle\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.051514 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-config-data\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.058303 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-config-data\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.059001 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-combined-ca-bundle\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" 
Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.080498 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxmnr\" (UniqueName: \"kubernetes.io/projected/b1a02641-de79-49cd-91a4-d689c669a38c-kube-api-access-hxmnr\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz"
Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.131546 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-5zjtz"
Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.295035 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac"
Jan 21 16:17:05 crc kubenswrapper[4902]: E0121 16:17:05.295556 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.602469 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-5zjtz"]
Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.731467 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-5zjtz" event={"ID":"b1a02641-de79-49cd-91a4-d689c669a38c","Type":"ContainerStarted","Data":"f45d4ff100cf62cbe14d607f941d4754955d6a683f1577c68cfa3d2ab9bbff49"}
Jan 21 16:17:06 crc kubenswrapper[4902]: I0121 16:17:06.861352 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7dd785d478-plbs7" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8443: connect: connection refused"
Jan 21 16:17:07 crc kubenswrapper[4902]: I0121 16:17:07.966272 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qqcwn"
Jan 21 16:17:08 crc kubenswrapper[4902]: I0121 16:17:08.036809 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qqcwn"
Jan 21 16:17:08 crc kubenswrapper[4902]: I0121 16:17:08.334892 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6845bd7746-jd2dk"
Jan 21 16:17:08 crc kubenswrapper[4902]: I0121 16:17:08.335237 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6845bd7746-jd2dk"
Jan 21 16:17:08 crc kubenswrapper[4902]: I0121 16:17:08.605806 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qqcwn"]
Jan 21 16:17:09 crc kubenswrapper[4902]: I0121 16:17:09.798741 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qqcwn" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="registry-server" containerID="cri-o://ef3174310ae77ae7733c59eda9f2154edea9a69da8c15f2f7b007132379ea630" gracePeriod=2
Jan 21 16:17:10 crc kubenswrapper[4902]: I0121 16:17:10.833006 4902 generic.go:334] "Generic (PLEG): container finished" podID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerID="ef3174310ae77ae7733c59eda9f2154edea9a69da8c15f2f7b007132379ea630" exitCode=0
Jan 21 16:17:10 crc kubenswrapper[4902]: I0121 16:17:10.833067 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcwn" event={"ID":"2c558c0f-33c5-4584-b548-fc5af8cee89e","Type":"ContainerDied","Data":"ef3174310ae77ae7733c59eda9f2154edea9a69da8c15f2f7b007132379ea630"}
Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.532921 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqcwn"
Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.591802 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-catalog-content\") pod \"2c558c0f-33c5-4584-b548-fc5af8cee89e\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") "
Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.591853 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-utilities\") pod \"2c558c0f-33c5-4584-b548-fc5af8cee89e\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") "
Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.592083 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77h79\" (UniqueName: \"kubernetes.io/projected/2c558c0f-33c5-4584-b548-fc5af8cee89e-kube-api-access-77h79\") pod \"2c558c0f-33c5-4584-b548-fc5af8cee89e\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") "
Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.594057 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-utilities" (OuterVolumeSpecName: "utilities") pod "2c558c0f-33c5-4584-b548-fc5af8cee89e" (UID: "2c558c0f-33c5-4584-b548-fc5af8cee89e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.596754 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c558c0f-33c5-4584-b548-fc5af8cee89e-kube-api-access-77h79" (OuterVolumeSpecName: "kube-api-access-77h79") pod "2c558c0f-33c5-4584-b548-fc5af8cee89e" (UID: "2c558c0f-33c5-4584-b548-fc5af8cee89e"). InnerVolumeSpecName "kube-api-access-77h79". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.694481 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77h79\" (UniqueName: \"kubernetes.io/projected/2c558c0f-33c5-4584-b548-fc5af8cee89e-kube-api-access-77h79\") on node \"crc\" DevicePath \"\""
Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.694557 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.723948 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c558c0f-33c5-4584-b548-fc5af8cee89e" (UID: "2c558c0f-33c5-4584-b548-fc5af8cee89e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.796277 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.860637 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-5zjtz" event={"ID":"b1a02641-de79-49cd-91a4-d689c669a38c","Type":"ContainerStarted","Data":"12cbd897a8c963b1753af6838fe6f74f721c8f8e6f46ac0835b5c50a96042e89"} Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.863274 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcwn" event={"ID":"2c558c0f-33c5-4584-b548-fc5af8cee89e","Type":"ContainerDied","Data":"dee8922d5520e1f0c611f74ca26c6dc79b0da6d5a6e133ff12dfae78dbe2c30a"} Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.863312 4902 scope.go:117] "RemoveContainer" containerID="ef3174310ae77ae7733c59eda9f2154edea9a69da8c15f2f7b007132379ea630" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.863345 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.909748 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-5zjtz" podStartSLOduration=2.190239242 podStartE2EDuration="9.909727821s" podCreationTimestamp="2026-01-21 16:17:04 +0000 UTC" firstStartedPulling="2026-01-21 16:17:05.618539994 +0000 UTC m=+6187.695373013" lastFinishedPulling="2026-01-21 16:17:13.338028563 +0000 UTC m=+6195.414861592" observedRunningTime="2026-01-21 16:17:13.894475862 +0000 UTC m=+6195.971308891" watchObservedRunningTime="2026-01-21 16:17:13.909727821 +0000 UTC m=+6195.986560850" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.917447 4902 scope.go:117] "RemoveContainer" containerID="0b15326eb4c064e7851c30f18a27df97dd90b66b41ed4359185f31df9de1590c" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.949922 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qqcwn"] Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.963650 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qqcwn"] Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.990062 4902 scope.go:117] "RemoveContainer" containerID="3978acdb017791c813d4f5337aced828704d5e523e45d415b1601e3ec73ed790" Jan 21 16:17:14 crc kubenswrapper[4902]: I0121 16:17:14.310189 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" path="/var/lib/kubelet/pods/2c558c0f-33c5-4584-b548-fc5af8cee89e/volumes" Jan 21 16:17:16 crc kubenswrapper[4902]: I0121 16:17:16.861590 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7dd785d478-plbs7" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8443: connect: connection refused" Jan 21 16:17:16 crc kubenswrapper[4902]: I0121 16:17:16.862558 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:17:16 crc kubenswrapper[4902]: I0121 16:17:16.904744 4902 generic.go:334] "Generic (PLEG): 
container finished" podID="b1a02641-de79-49cd-91a4-d689c669a38c" containerID="12cbd897a8c963b1753af6838fe6f74f721c8f8e6f46ac0835b5c50a96042e89" exitCode=0 Jan 21 16:17:16 crc kubenswrapper[4902]: I0121 16:17:16.904810 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-5zjtz" event={"ID":"b1a02641-de79-49cd-91a4-d689c669a38c","Type":"ContainerDied","Data":"12cbd897a8c963b1753af6838fe6f74f721c8f8e6f46ac0835b5c50a96042e89"} Jan 21 16:17:17 crc kubenswrapper[4902]: I0121 16:17:17.295251 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:17:17 crc kubenswrapper[4902]: E0121 16:17:17.295823 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.321286 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.493249 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-config-data\") pod \"b1a02641-de79-49cd-91a4-d689c669a38c\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.493587 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxmnr\" (UniqueName: \"kubernetes.io/projected/b1a02641-de79-49cd-91a4-d689c669a38c-kube-api-access-hxmnr\") pod \"b1a02641-de79-49cd-91a4-d689c669a38c\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.493684 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-combined-ca-bundle\") pod \"b1a02641-de79-49cd-91a4-d689c669a38c\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.499035 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a02641-de79-49cd-91a4-d689c669a38c-kube-api-access-hxmnr" (OuterVolumeSpecName: "kube-api-access-hxmnr") pod "b1a02641-de79-49cd-91a4-d689c669a38c" (UID: "b1a02641-de79-49cd-91a4-d689c669a38c"). InnerVolumeSpecName "kube-api-access-hxmnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.525273 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1a02641-de79-49cd-91a4-d689c669a38c" (UID: "b1a02641-de79-49cd-91a4-d689c669a38c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.562741 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-config-data" (OuterVolumeSpecName: "config-data") pod "b1a02641-de79-49cd-91a4-d689c669a38c" (UID: "b1a02641-de79-49cd-91a4-d689c669a38c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.595285 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.595309 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxmnr\" (UniqueName: \"kubernetes.io/projected/b1a02641-de79-49cd-91a4-d689c669a38c-kube-api-access-hxmnr\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.595319 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.926389 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-5zjtz" event={"ID":"b1a02641-de79-49cd-91a4-d689c669a38c","Type":"ContainerDied","Data":"f45d4ff100cf62cbe14d607f941d4754955d6a683f1577c68cfa3d2ab9bbff49"} Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.926430 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f45d4ff100cf62cbe14d607f941d4754955d6a683f1577c68cfa3d2ab9bbff49" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.926469 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.654999 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-77695bdf6-844ml"] Jan 21 16:17:20 crc kubenswrapper[4902]: E0121 16:17:20.655826 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="extract-utilities" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.655846 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="extract-utilities" Jan 21 16:17:20 crc kubenswrapper[4902]: E0121 16:17:20.655880 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="extract-content" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.655889 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="extract-content" Jan 21 16:17:20 crc kubenswrapper[4902]: E0121 16:17:20.655903 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="registry-server" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.655911 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="registry-server" Jan 21 16:17:20 crc kubenswrapper[4902]: E0121 16:17:20.655930 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a02641-de79-49cd-91a4-d689c669a38c" containerName="heat-db-sync" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.655937 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a02641-de79-49cd-91a4-d689c669a38c" containerName="heat-db-sync" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.656183 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="registry-server" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.656201 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a02641-de79-49cd-91a4-d689c669a38c" containerName="heat-db-sync" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.657100 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.662326 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.667815 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-q7twz" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.677444 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.698633 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-77695bdf6-844ml"] Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.722637 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.762074 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-cf8444c78-xmqt2"] Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.763805 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.767987 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.782567 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vx98\" (UniqueName: \"kubernetes.io/projected/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-kube-api-access-5vx98\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.782682 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.782798 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data-custom\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.782834 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-combined-ca-bundle\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.783178 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-combined-ca-bundle\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.783239 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv2sw\" (UniqueName: \"kubernetes.io/projected/26cf64d9-8389-473d-a51f-2ca282b5787f-kube-api-access-tv2sw\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.783282 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.783437 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data-custom\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.789879 4902 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-cf8444c78-xmqt2"] Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.848734 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-65bd9b7448-nvhqd"] Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.850420 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.854832 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.862217 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65bd9b7448-nvhqd"] Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886388 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-combined-ca-bundle\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886527 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-combined-ca-bundle\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886576 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv2sw\" (UniqueName: \"kubernetes.io/projected/26cf64d9-8389-473d-a51f-2ca282b5787f-kube-api-access-tv2sw\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886632 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886656 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmrh4\" (UniqueName: \"kubernetes.io/projected/b55674f9-c7ae-4344-979f-d80fc2d0e03b-kube-api-access-lmrh4\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886735 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886820 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data-custom\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" 
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886869 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vx98\" (UniqueName: \"kubernetes.io/projected/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-kube-api-access-5vx98\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886918 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.887126 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data-custom\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.887155 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-combined-ca-bundle\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.887242 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data-custom\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.895113 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-combined-ca-bundle\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.895869 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data-custom\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.896652 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-combined-ca-bundle\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.898334 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.904421 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data-custom\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.913392 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv2sw\" (UniqueName: \"kubernetes.io/projected/26cf64d9-8389-473d-a51f-2ca282b5787f-kube-api-access-tv2sw\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.914875 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.917304 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vx98\" (UniqueName: \"kubernetes.io/projected/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-kube-api-access-5vx98\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.989913 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data-custom\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.990011 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-combined-ca-bundle\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.990102 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmrh4\" (UniqueName: \"kubernetes.io/projected/b55674f9-c7ae-4344-979f-d80fc2d0e03b-kube-api-access-lmrh4\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.990154 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd"
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.993294 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-77695bdf6-844ml"
Need to start a new one" pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:21 crc kubenswrapper[4902]: W0121 16:17:21.013502 4902 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1a02641_de79_49cd_91a4_d689c669a38c.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1a02641_de79_49cd_91a4_d689c669a38c.slice: no such file or directory Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.024170 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data-custom\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.024840 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.028986 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-combined-ca-bundle\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.048940 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmrh4\" (UniqueName: \"kubernetes.io/projected/b55674f9-c7ae-4344-979f-d80fc2d0e03b-kube-api-access-lmrh4\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.110083 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.171557 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:21 crc kubenswrapper[4902]: E0121 16:17:21.392257 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91829544_e720_43f3_b3dd_3f1240beb6f6.slice/crio-conmon-92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b8ff7ce_f44c_45d2_ac7c_ddebb604798c.slice/crio-4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff9e17b7_5e08_4042_9b1b_ccad64651eef.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c558c0f_33c5_4584_b548_fc5af8cee89e.slice/crio-ef3174310ae77ae7733c59eda9f2154edea9a69da8c15f2f7b007132379ea630.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod217952b8_c6e3_44ba_b5f2_dabc3dfa9b1c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff9e17b7_5e08_4042_9b1b_ccad64651eef.slice/crio-8e3ea4085f3e9419958669812fbb80d867719697fa5d6f29fd25013487806482.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c558c0f_33c5_4584_b548_fc5af8cee89e.slice/crio-dee8922d5520e1f0c611f74ca26c6dc79b0da6d5a6e133ff12dfae78dbe2c30a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod217952b8_c6e3_44ba_b5f2_dabc3dfa9b1c.slice/crio-conmon-745a40ea71fa0659b994aca7c2aff73301bd6c551946f45e224b9ab71b71e18f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod217952b8_c6e3_44ba_b5f2_dabc3dfa9b1c.slice/crio-a98983d796b14519949744b954be43459febb454e85432558a7cb24d6e5aa795\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b8ff7ce_f44c_45d2_ac7c_ddebb604798c.slice/crio-conmon-4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91829544_e720_43f3_b3dd_3f1240beb6f6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91829544_e720_43f3_b3dd_3f1240beb6f6.slice/crio-ff69022f1f995adb49589ec367e0815a70cf064bef008a8fa059b4c649c81ff9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c558c0f_33c5_4584_b548_fc5af8cee89e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c558c0f_33c5_4584_b548_fc5af8cee89e.slice/crio-conmon-ef3174310ae77ae7733c59eda9f2154edea9a69da8c15f2f7b007132379ea630.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91829544_e720_43f3_b3dd_3f1240beb6f6.slice/crio-92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod217952b8_c6e3_44ba_b5f2_dabc3dfa9b1c.slice/crio-745a40ea71fa0659b994aca7c2aff73301bd6c551946f45e224b9ab71b71e18f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff9e17b7_5e08_4042_9b1b_ccad64651eef.slice/crio-8796f7c4e100512924c5fa2afe8b23fc4c173a893529ebcc3ca8923c22390fcb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff9e17b7_5e08_4042_9b1b_ccad64651eef.slice/crio-conmon-8e3ea4085f3e9419958669812fbb80d867719697fa5d6f29fd25013487806482.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.646759 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-77695bdf6-844ml"] Jan 21 16:17:21 crc kubenswrapper[4902]: W0121 16:17:21.654322 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26cf64d9_8389_473d_a51f_2ca282b5787f.slice/crio-cb43a5fb250f0e18d67e65759c41da1b4870b16bb8305d85b3eb43efb5279133 WatchSource:0}: Error finding container cb43a5fb250f0e18d67e65759c41da1b4870b16bb8305d85b3eb43efb5279133: Status 404 returned error can't find the container with id cb43a5fb250f0e18d67e65759c41da1b4870b16bb8305d85b3eb43efb5279133 Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.747334 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.918828 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbsdh\" (UniqueName: \"kubernetes.io/projected/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-kube-api-access-vbsdh\") pod \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.918885 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-secret-key\") pod \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.918979 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-scripts\") pod \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.919032 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-combined-ca-bundle\") pod \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.919138 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-tls-certs\") pod \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.919213 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-logs\") pod \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.919240 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-config-data\") pod \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.923593 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-logs" (OuterVolumeSpecName: "logs") pod "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" (UID: "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.934690 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-kube-api-access-vbsdh" (OuterVolumeSpecName: "kube-api-access-vbsdh") pod "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" (UID: "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c"). InnerVolumeSpecName "kube-api-access-vbsdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.935329 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" (UID: "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.939064 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-cf8444c78-xmqt2"] Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.958921 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-scripts" (OuterVolumeSpecName: "scripts") pod "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" (UID: "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:17:21 crc kubenswrapper[4902]: W0121 16:17:21.960286 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda67ffd84_72d3_4d63_b99a_0fe8ebe12753.slice/crio-c268dfd4954d075a28333859e3f54fce320b71b326194e4f9bbadd0bffc420fd WatchSource:0}: Error finding container c268dfd4954d075a28333859e3f54fce320b71b326194e4f9bbadd0bffc420fd: Status 404 returned error can't find the container with id c268dfd4954d075a28333859e3f54fce320b71b326194e4f9bbadd0bffc420fd Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.964537 4902 generic.go:334] "Generic (PLEG): container finished" podID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerID="4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f" exitCode=137 Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.964630 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd785d478-plbs7" event={"ID":"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c","Type":"ContainerDied","Data":"4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f"} Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.964662 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.964670 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd785d478-plbs7" event={"ID":"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c","Type":"ContainerDied","Data":"4686f3e06661bcfec8e992129cddce590c37e819ce3cb3dd7fbf805003f4c581"} Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.964694 4902 scope.go:117] "RemoveContainer" containerID="f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.977271 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" (UID: "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.982617 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-77695bdf6-844ml" event={"ID":"26cf64d9-8389-473d-a51f-2ca282b5787f","Type":"ContainerStarted","Data":"c51c67b3d5eb2547d5d118a566d05127b5232bbe2fa2468af4680ad00279aa48"} Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.982655 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-77695bdf6-844ml" event={"ID":"26cf64d9-8389-473d-a51f-2ca282b5787f","Type":"ContainerStarted","Data":"cb43a5fb250f0e18d67e65759c41da1b4870b16bb8305d85b3eb43efb5279133"} Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.983747 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.005999 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-config-data" (OuterVolumeSpecName: "config-data") pod "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" (UID: "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.006961 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" (UID: "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.007488 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-77695bdf6-844ml" podStartSLOduration=2.007469935 podStartE2EDuration="2.007469935s" podCreationTimestamp="2026-01-21 16:17:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:17:22.004626885 +0000 UTC m=+6204.081459924" watchObservedRunningTime="2026-01-21 16:17:22.007469935 +0000 UTC m=+6204.084302964" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.024159 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.024186 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.024197 4902 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.024229 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.024239 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.024248 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbsdh\" (UniqueName: \"kubernetes.io/projected/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-kube-api-access-vbsdh\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.024257 4902 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.114033 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65bd9b7448-nvhqd"] Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.255457 4902 scope.go:117] "RemoveContainer" containerID="4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.320472 4902 scope.go:117] "RemoveContainer" containerID="f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b" Jan 21 16:17:22 crc kubenswrapper[4902]: E0121 16:17:22.321145 4902 log.go:32] "ContainerStatus 
Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.321255 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b"} err="failed to get container status \"f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b\": rpc error: code = NotFound desc = could not find container \"f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b\": container with ID starting with f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b not found: ID does not exist"
Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.321362 4902 scope.go:117] "RemoveContainer" containerID="4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f"
Jan 21 16:17:22 crc kubenswrapper[4902]: E0121 16:17:22.325966 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f\": container with ID starting with 4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f not found: ID does not exist" containerID="4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f"
Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.326010 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f"} err="failed to get container status \"4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f\": rpc error: code = NotFound desc = could not find container \"4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f\": container with ID starting with 4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f not found: ID does not exist"
Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.328800 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7dd785d478-plbs7"]
Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.336002 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7dd785d478-plbs7"]
Jan 21 16:17:23 crc kubenswrapper[4902]: I0121 16:17:23.000178 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" event={"ID":"b55674f9-c7ae-4344-979f-d80fc2d0e03b","Type":"ContainerStarted","Data":"8ddcaecce27ce57414d483ff30439ab5d5ef54218cadf2c9d61387f3947e594b"}
Jan 21 16:17:23 crc kubenswrapper[4902]: I0121 16:17:23.006413 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cf8444c78-xmqt2" event={"ID":"a67ffd84-72d3-4d63-b99a-0fe8ebe12753","Type":"ContainerStarted","Data":"c268dfd4954d075a28333859e3f54fce320b71b326194e4f9bbadd0bffc420fd"}
Jan 21 16:17:23 crc kubenswrapper[4902]: I0121 16:17:23.437023 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6845bd7746-jd2dk"
Jan 21 16:17:23 crc kubenswrapper[4902]: I0121 16:17:23.498860 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-786f96566b-w596t"]
Jan 21 16:17:23 crc kubenswrapper[4902]: I0121 16:17:23.499332 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-786f96566b-w596t" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon" containerID="cri-o://b5b92e7f1cc27fed5221f05667fdb25b332ac410148a8012346660a03a7b0fdf" gracePeriod=30
Jan 21 16:17:23 crc kubenswrapper[4902]: I0121 16:17:23.499195 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-786f96566b-w596t" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon-log" containerID="cri-o://dd9c814774718de26b2a6f5f159c980f718ec5bd198d471d2426d82a67f32ddd" gracePeriod=30
Jan 21 16:17:24 crc kubenswrapper[4902]: I0121 16:17:24.310374 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" path="/var/lib/kubelet/pods/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c/volumes"
Jan 21 16:17:25 crc kubenswrapper[4902]: I0121 16:17:25.225833 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" event={"ID":"b55674f9-c7ae-4344-979f-d80fc2d0e03b","Type":"ContainerStarted","Data":"3a0cbf96360641a1be4d26f0628bce956b77ed886ce1151ffc5559087263908f"}
Jan 21 16:17:25 crc kubenswrapper[4902]: I0121 16:17:25.226615 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd"
Jan 21 16:17:25 crc kubenswrapper[4902]: I0121 16:17:25.245876 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" podStartSLOduration=3.161307694 podStartE2EDuration="5.245848544s" podCreationTimestamp="2026-01-21 16:17:20 +0000 UTC" firstStartedPulling="2026-01-21 16:17:22.266492284 +0000 UTC m=+6204.343325313" lastFinishedPulling="2026-01-21 16:17:24.351033124 +0000 UTC m=+6206.427866163" observedRunningTime="2026-01-21 16:17:25.242744947 +0000 UTC m=+6207.319577976" watchObservedRunningTime="2026-01-21 16:17:25.245848544 +0000 UTC m=+6207.322681573"
Jan 21 16:17:26 crc kubenswrapper[4902]: I0121 16:17:26.245549 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cf8444c78-xmqt2" event={"ID":"a67ffd84-72d3-4d63-b99a-0fe8ebe12753","Type":"ContainerStarted","Data":"94a0a3d3b10817a020a3d5800679a614ff1fe2e0c400b8287a07dd2afb859acd"}
Jan 21 16:17:26 crc kubenswrapper[4902]: I0121 16:17:26.245823 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-cf8444c78-xmqt2"
Jan 21 16:17:26 crc kubenswrapper[4902]: I0121 16:17:26.301619 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-cf8444c78-xmqt2" podStartSLOduration=2.812518279 podStartE2EDuration="6.301588813s" podCreationTimestamp="2026-01-21 16:17:20 +0000 UTC" firstStartedPulling="2026-01-21 16:17:21.962453638 +0000 UTC m=+6204.039286667" lastFinishedPulling="2026-01-21 16:17:25.451524172 +0000 UTC m=+6207.528357201" observedRunningTime="2026-01-21 16:17:26.277568407 +0000 UTC m=+6208.354401466" watchObservedRunningTime="2026-01-21 16:17:26.301588813 +0000 UTC m=+6208.378421862"
Jan 21 16:17:26 crc kubenswrapper[4902]: I0121 16:17:26.407298 4902 scope.go:117] "RemoveContainer" containerID="65fe44f3b0e17d56dbcc24184af4bec7f8662c78351c1314a8a65ecfa5dbb257"
Jan 21 16:17:26 crc kubenswrapper[4902]: I0121 16:17:26.525892 4902 scope.go:117] "RemoveContainer" containerID="f86be6b9f95f42ed7575c1db6ca5d50d96cc6520921a01dd5dff53f1cdbb4ae8"
Jan 21 16:17:26 crc kubenswrapper[4902]: I0121 16:17:26.589990 4902 scope.go:117] "RemoveContainer" containerID="371e1a26e1a76ba398e48c1e98072317dd29a6c8abf9e8ab60b15d658481161c"
"RemoveContainer" containerID="371e1a26e1a76ba398e48c1e98072317dd29a6c8abf9e8ab60b15d658481161c" Jan 21 16:17:26 crc kubenswrapper[4902]: I0121 16:17:26.623245 4902 scope.go:117] "RemoveContainer" containerID="efad9d3030aa3752d324b9640e74fe010cdfafc51d4ab887dfdd4055c1f6fa5a" Jan 21 16:17:26 crc kubenswrapper[4902]: I0121 16:17:26.950025 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-786f96566b-w596t" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.111:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8443: connect: connection refused" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.253967 4902 generic.go:334] "Generic (PLEG): container finished" podID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerID="b5b92e7f1cc27fed5221f05667fdb25b332ac410148a8012346660a03a7b0fdf" exitCode=0 Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.254028 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786f96566b-w596t" event={"ID":"b772cd9d-83ce-4675-84de-09f40bdcabe3","Type":"ContainerDied","Data":"b5b92e7f1cc27fed5221f05667fdb25b332ac410148a8012346660a03a7b0fdf"} Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.873486 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-68647965fb-5bvjr"] Jan 21 16:17:27 crc kubenswrapper[4902]: E0121 16:17:27.874099 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.874114 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon" Jan 21 16:17:27 crc kubenswrapper[4902]: E0121 16:17:27.874145 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon-log" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.874152 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon-log" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.874332 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon-log" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.874354 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.875010 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.889253 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-575c784d98-scqmc"] Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.890820 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.899967 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-74df5fd5cf-g8qhb"] Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.901745 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.920394 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-68647965fb-5bvjr"] Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:27.928932 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74df5fd5cf-g8qhb"] Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.256770 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.256886 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-combined-ca-bundle\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.256930 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw66l\" (UniqueName: \"kubernetes.io/projected/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-kube-api-access-pw66l\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.256975 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krptw\" (UniqueName: \"kubernetes.io/projected/bb701a34-be50-44cd-b277-b687e8499664-kube-api-access-krptw\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.257007 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-combined-ca-bundle\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.257028 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-config-data-custom\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.258930 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.258964 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-combined-ca-bundle\") pod 
\"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.259004 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data-custom\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.259066 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m89x7\" (UniqueName: \"kubernetes.io/projected/f471277e-f0f2-4a10-8234-ed5c3256c82a-kube-api-access-m89x7\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.259148 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data-custom\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.259176 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-config-data\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.299563 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:17:28 crc kubenswrapper[4902]: E0121 16:17:28.299817 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.362916 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krptw\" (UniqueName: \"kubernetes.io/projected/bb701a34-be50-44cd-b277-b687e8499664-kube-api-access-krptw\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.362998 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-combined-ca-bundle\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363022 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-config-data-custom\") pod \"heat-engine-68647965fb-5bvjr\" (UID: 
\"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363097 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363113 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-combined-ca-bundle\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363148 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data-custom\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363188 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m89x7\" (UniqueName: \"kubernetes.io/projected/f471277e-f0f2-4a10-8234-ed5c3256c82a-kube-api-access-m89x7\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363280 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data-custom\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363302 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-config-data\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363365 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363539 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-combined-ca-bundle\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363590 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw66l\" (UniqueName: \"kubernetes.io/projected/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-kube-api-access-pw66l\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " 
pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.370390 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data-custom\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.371382 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-575c784d98-scqmc"] Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.371672 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-combined-ca-bundle\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.373291 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.375502 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-config-data\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.381583 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-combined-ca-bundle\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.381951 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-config-data-custom\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.382492 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data-custom\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.386390 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-combined-ca-bundle\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.397007 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw66l\" (UniqueName: \"kubernetes.io/projected/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-kube-api-access-pw66l\") pod \"heat-api-575c784d98-scqmc\" (UID: 
\"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.406834 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m89x7\" (UniqueName: \"kubernetes.io/projected/f471277e-f0f2-4a10-8234-ed5c3256c82a-kube-api-access-m89x7\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.407112 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krptw\" (UniqueName: \"kubernetes.io/projected/bb701a34-be50-44cd-b277-b687e8499664-kube-api-access-krptw\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.407301 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.496710 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.628091 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.661693 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.235607 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-68647965fb-5bvjr"] Jan 21 16:17:29 crc kubenswrapper[4902]: W0121 16:17:29.252641 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb701a34_be50_44cd_b277_b687e8499664.slice/crio-aadd5a2a19662f4118bce40a15337299b79d1652cfab8ff90921ae10164fa015 WatchSource:0}: Error finding container aadd5a2a19662f4118bce40a15337299b79d1652cfab8ff90921ae10164fa015: Status 404 returned error can't find the container with id aadd5a2a19662f4118bce40a15337299b79d1652cfab8ff90921ae10164fa015 Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.328524 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68647965fb-5bvjr" event={"ID":"bb701a34-be50-44cd-b277-b687e8499664","Type":"ContainerStarted","Data":"aadd5a2a19662f4118bce40a15337299b79d1652cfab8ff90921ae10164fa015"} Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.440566 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-575c784d98-scqmc"] Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.560328 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74df5fd5cf-g8qhb"] Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.662865 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-cf8444c78-xmqt2"] Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.663086 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-cf8444c78-xmqt2" podUID="a67ffd84-72d3-4d63-b99a-0fe8ebe12753" containerName="heat-api" 
containerID="cri-o://94a0a3d3b10817a020a3d5800679a614ff1fe2e0c400b8287a07dd2afb859acd" gracePeriod=60 Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.759650 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-65bd9b7448-nvhqd"] Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.760128 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" podUID="b55674f9-c7ae-4344-979f-d80fc2d0e03b" containerName="heat-cfnapi" containerID="cri-o://3a0cbf96360641a1be4d26f0628bce956b77ed886ce1151ffc5559087263908f" gracePeriod=60 Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.801325 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" podUID="b55674f9-c7ae-4344-979f-d80fc2d0e03b" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.123:8000/healthcheck\": EOF" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.828808 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-575dc5884b-mwxz4"] Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.833829 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.840546 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.840736 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.903413 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-575dc5884b-mwxz4"] Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.926550 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5c8d887b44-lnw77"] Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.928325 4902 util.go:30] "No sandbox for pod can be found. 
Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.928325 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5c8d887b44-lnw77"
Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.932469 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc"
Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.932528 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc"
Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.946288 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5c8d887b44-lnw77"]
Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.950622 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-config-data-custom\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4"
Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.952398 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k7zv\" (UniqueName: \"kubernetes.io/projected/9bfec31e-5cec-4820-9f26-34413330e44c-kube-api-access-4k7zv\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4"
Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.952655 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-internal-tls-certs\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4"
Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.953170 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-config-data\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4"
Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.953319 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-public-tls-certs\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4"
Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.953487 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-combined-ca-bundle\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4"
Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.054926 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-config-data-custom\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4"
Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.054977 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"kube-api-access-4k7zv\" (UniqueName: \"kubernetes.io/projected/9bfec31e-5cec-4820-9f26-34413330e44c-kube-api-access-4k7zv\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055019 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-internal-tls-certs\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055080 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-config-data\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055112 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-config-data\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055150 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-public-tls-certs\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055177 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-internal-tls-certs\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055211 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-combined-ca-bundle\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055243 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g462c\" (UniqueName: \"kubernetes.io/projected/5acd47b5-1a65-41c3-af06-401bd9880c1f-kube-api-access-g462c\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055262 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-public-tls-certs\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055287 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-combined-ca-bundle\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055329 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-config-data-custom\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.060925 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-internal-tls-certs\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.061146 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-combined-ca-bundle\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.062170 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-public-tls-certs\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.062961 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-config-data\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.063725 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-config-data-custom\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.078451 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k7zv\" (UniqueName: \"kubernetes.io/projected/9bfec31e-5cec-4820-9f26-34413330e44c-kube-api-access-4k7zv\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.157314 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-config-data\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.157437 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-internal-tls-certs\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.157537 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g462c\" (UniqueName: \"kubernetes.io/projected/5acd47b5-1a65-41c3-af06-401bd9880c1f-kube-api-access-g462c\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.157573 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-public-tls-certs\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.157616 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-combined-ca-bundle\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.157664 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-config-data-custom\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.165017 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-internal-tls-certs\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.165666 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-config-data-custom\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.165827 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-public-tls-certs\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.167204 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-config-data\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.167670 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-combined-ca-bundle\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.177773 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g462c\" (UniqueName: \"kubernetes.io/projected/5acd47b5-1a65-41c3-af06-401bd9880c1f-kube-api-access-g462c\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.218426 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.254786 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.344590 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68647965fb-5bvjr" event={"ID":"bb701a34-be50-44cd-b277-b687e8499664","Type":"ContainerStarted","Data":"fdd548492b6e40f80b093141b68efce769abae41b8a31f386ea29bbd895d7193"} Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.346488 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.362742 4902 generic.go:334] "Generic (PLEG): container finished" podID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" containerID="15c4f131c64554319e2cd62000d98288a117ca09d8376d69bd4cecdd1964137c" exitCode=1 Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.362813 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-575c784d98-scqmc" event={"ID":"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d","Type":"ContainerDied","Data":"15c4f131c64554319e2cd62000d98288a117ca09d8376d69bd4cecdd1964137c"} Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.362844 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-575c784d98-scqmc" event={"ID":"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d","Type":"ContainerStarted","Data":"3cccd66a946a157a90314b5d37ad28c489b5d8240c4b1c57e8f9ef0b761138da"} Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.363234 4902 scope.go:117] "RemoveContainer" containerID="15c4f131c64554319e2cd62000d98288a117ca09d8376d69bd4cecdd1964137c" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.375372 4902 generic.go:334] "Generic (PLEG): container finished" podID="f471277e-f0f2-4a10-8234-ed5c3256c82a" containerID="f3c2f8d67445ae1d14e1a237160100d289da6488adf80971ccb697a8822a9fb5" exitCode=1 Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.375463 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" event={"ID":"f471277e-f0f2-4a10-8234-ed5c3256c82a","Type":"ContainerDied","Data":"f3c2f8d67445ae1d14e1a237160100d289da6488adf80971ccb697a8822a9fb5"} Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.375490 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" event={"ID":"f471277e-f0f2-4a10-8234-ed5c3256c82a","Type":"ContainerStarted","Data":"9bd457e7b58a6be152461f46b27b4879936f13cc46a82ddd9cea36267b98f5de"} Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.376113 4902 scope.go:117] "RemoveContainer" 
containerID="f3c2f8d67445ae1d14e1a237160100d289da6488adf80971ccb697a8822a9fb5" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.393273 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-68647965fb-5bvjr" podStartSLOduration=3.393245543 podStartE2EDuration="3.393245543s" podCreationTimestamp="2026-01-21 16:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:17:30.371536012 +0000 UTC m=+6212.448369041" watchObservedRunningTime="2026-01-21 16:17:30.393245543 +0000 UTC m=+6212.470078572" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.400187 4902 generic.go:334] "Generic (PLEG): container finished" podID="a67ffd84-72d3-4d63-b99a-0fe8ebe12753" containerID="94a0a3d3b10817a020a3d5800679a614ff1fe2e0c400b8287a07dd2afb859acd" exitCode=0 Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.400227 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cf8444c78-xmqt2" event={"ID":"a67ffd84-72d3-4d63-b99a-0fe8ebe12753","Type":"ContainerDied","Data":"94a0a3d3b10817a020a3d5800679a614ff1fe2e0c400b8287a07dd2afb859acd"} Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.741442 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.783705 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vx98\" (UniqueName: \"kubernetes.io/projected/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-kube-api-access-5vx98\") pod \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.783828 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data\") pod \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.783881 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data-custom\") pod \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.783902 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-combined-ca-bundle\") pod \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.800162 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a67ffd84-72d3-4d63-b99a-0fe8ebe12753" (UID: "a67ffd84-72d3-4d63-b99a-0fe8ebe12753"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.811328 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-kube-api-access-5vx98" (OuterVolumeSpecName: "kube-api-access-5vx98") pod "a67ffd84-72d3-4d63-b99a-0fe8ebe12753" (UID: "a67ffd84-72d3-4d63-b99a-0fe8ebe12753"). InnerVolumeSpecName "kube-api-access-5vx98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.843482 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a67ffd84-72d3-4d63-b99a-0fe8ebe12753" (UID: "a67ffd84-72d3-4d63-b99a-0fe8ebe12753"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.892250 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.892286 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.892298 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vx98\" (UniqueName: \"kubernetes.io/projected/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-kube-api-access-5vx98\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.954126 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data" (OuterVolumeSpecName: "config-data") pod "a67ffd84-72d3-4d63-b99a-0fe8ebe12753" (UID: "a67ffd84-72d3-4d63-b99a-0fe8ebe12753"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.000657 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.057243 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5c8d887b44-lnw77"] Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.063813 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-575dc5884b-mwxz4"] Jan 21 16:17:31 crc kubenswrapper[4902]: W0121 16:17:31.124673 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5acd47b5_1a65_41c3_af06_401bd9880c1f.slice/crio-7eda3ce01e8c2937250dbb787105cd4df25d2e87bcc76612a88e447c1a78fef9 WatchSource:0}: Error finding container 7eda3ce01e8c2937250dbb787105cd4df25d2e87bcc76612a88e447c1a78fef9: Status 404 returned error can't find the container with id 7eda3ce01e8c2937250dbb787105cd4df25d2e87bcc76612a88e447c1a78fef9 Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.210234 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.431845 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c8d887b44-lnw77" event={"ID":"5acd47b5-1a65-41c3-af06-401bd9880c1f","Type":"ContainerStarted","Data":"7eda3ce01e8c2937250dbb787105cd4df25d2e87bcc76612a88e447c1a78fef9"} Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.446831 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-575c784d98-scqmc" event={"ID":"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d","Type":"ContainerStarted","Data":"e99be572b5f0de08db1d4011d3d0be2733c9e9eaf11a42e426839109762bfc6f"} Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.446899 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.468749 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-575c784d98-scqmc" podStartSLOduration=4.468714368 podStartE2EDuration="4.468714368s" podCreationTimestamp="2026-01-21 16:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:17:31.461882955 +0000 UTC m=+6213.538715984" watchObservedRunningTime="2026-01-21 16:17:31.468714368 +0000 UTC m=+6213.545547387" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.478118 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" event={"ID":"f471277e-f0f2-4a10-8234-ed5c3256c82a","Type":"ContainerStarted","Data":"0a3cd4119f6b93ca8c960d0e547537c8f6b4c8002580608a93d62f7ad2909dea"} Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.478339 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.479015 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-575dc5884b-mwxz4" event={"ID":"9bfec31e-5cec-4820-9f26-34413330e44c","Type":"ContainerStarted","Data":"6621e58a027de94e58b1552acf55b8206233929ba4aaebc3504289a32afdb2f1"} Jan 21 16:17:31 crc 
kubenswrapper[4902]: I0121 16:17:31.486388 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.487331 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cf8444c78-xmqt2" event={"ID":"a67ffd84-72d3-4d63-b99a-0fe8ebe12753","Type":"ContainerDied","Data":"c268dfd4954d075a28333859e3f54fce320b71b326194e4f9bbadd0bffc420fd"} Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.487383 4902 scope.go:117] "RemoveContainer" containerID="94a0a3d3b10817a020a3d5800679a614ff1fe2e0c400b8287a07dd2afb859acd" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.500190 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" podStartSLOduration=4.500167713 podStartE2EDuration="4.500167713s" podCreationTimestamp="2026-01-21 16:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:17:31.498529757 +0000 UTC m=+6213.575362786" watchObservedRunningTime="2026-01-21 16:17:31.500167713 +0000 UTC m=+6213.577000742" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.555928 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-cf8444c78-xmqt2"] Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.565745 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-cf8444c78-xmqt2"] Jan 21 16:17:31 crc kubenswrapper[4902]: E0121 16:17:31.730445 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6ae4f0a_2614_4689_83ae_4cef7ae1df9d.slice/crio-e99be572b5f0de08db1d4011d3d0be2733c9e9eaf11a42e426839109762bfc6f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda67ffd84_72d3_4d63_b99a_0fe8ebe12753.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.305511 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a67ffd84-72d3-4d63-b99a-0fe8ebe12753" path="/var/lib/kubelet/pods/a67ffd84-72d3-4d63-b99a-0fe8ebe12753/volumes" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.503452 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c8d887b44-lnw77" event={"ID":"5acd47b5-1a65-41c3-af06-401bd9880c1f","Type":"ContainerStarted","Data":"461993387b5608976e6e2282fd1b390053faa19bbea7f9d5be7628219eacf786"} Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.503643 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.507186 4902 generic.go:334] "Generic (PLEG): container finished" podID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" containerID="e99be572b5f0de08db1d4011d3d0be2733c9e9eaf11a42e426839109762bfc6f" exitCode=1 Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.507249 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-575c784d98-scqmc" event={"ID":"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d","Type":"ContainerDied","Data":"e99be572b5f0de08db1d4011d3d0be2733c9e9eaf11a42e426839109762bfc6f"} Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.507280 4902 scope.go:117] "RemoveContainer" 
containerID="15c4f131c64554319e2cd62000d98288a117ca09d8376d69bd4cecdd1964137c" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.507712 4902 scope.go:117] "RemoveContainer" containerID="e99be572b5f0de08db1d4011d3d0be2733c9e9eaf11a42e426839109762bfc6f" Jan 21 16:17:32 crc kubenswrapper[4902]: E0121 16:17:32.507953 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-575c784d98-scqmc_openstack(c6ae4f0a-2614-4689-83ae-4cef7ae1df9d)\"" pod="openstack/heat-api-575c784d98-scqmc" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.522421 4902 generic.go:334] "Generic (PLEG): container finished" podID="f471277e-f0f2-4a10-8234-ed5c3256c82a" containerID="0a3cd4119f6b93ca8c960d0e547537c8f6b4c8002580608a93d62f7ad2909dea" exitCode=1 Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.522505 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" event={"ID":"f471277e-f0f2-4a10-8234-ed5c3256c82a","Type":"ContainerDied","Data":"0a3cd4119f6b93ca8c960d0e547537c8f6b4c8002580608a93d62f7ad2909dea"} Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.523344 4902 scope.go:117] "RemoveContainer" containerID="0a3cd4119f6b93ca8c960d0e547537c8f6b4c8002580608a93d62f7ad2909dea" Jan 21 16:17:32 crc kubenswrapper[4902]: E0121 16:17:32.523676 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-74df5fd5cf-g8qhb_openstack(f471277e-f0f2-4a10-8234-ed5c3256c82a)\"" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.527509 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5c8d887b44-lnw77" podStartSLOduration=3.527490882 podStartE2EDuration="3.527490882s" podCreationTimestamp="2026-01-21 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:17:32.524663413 +0000 UTC m=+6214.601496472" watchObservedRunningTime="2026-01-21 16:17:32.527490882 +0000 UTC m=+6214.604323911" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.531216 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-575dc5884b-mwxz4" event={"ID":"9bfec31e-5cec-4820-9f26-34413330e44c","Type":"ContainerStarted","Data":"96c0755c729932ca2fbeba1d6f3b1d8d55971c0fa7b2ac7233ae108743ce654b"} Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.531431 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.595916 4902 scope.go:117] "RemoveContainer" containerID="f3c2f8d67445ae1d14e1a237160100d289da6488adf80971ccb697a8822a9fb5" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.596510 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-575dc5884b-mwxz4" podStartSLOduration=3.5962742580000002 podStartE2EDuration="3.596274258s" podCreationTimestamp="2026-01-21 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:17:32.587069299 +0000 UTC 
m=+6214.663902328" watchObservedRunningTime="2026-01-21 16:17:32.596274258 +0000 UTC m=+6214.673107277" Jan 21 16:17:33 crc kubenswrapper[4902]: I0121 16:17:33.541801 4902 scope.go:117] "RemoveContainer" containerID="0a3cd4119f6b93ca8c960d0e547537c8f6b4c8002580608a93d62f7ad2909dea" Jan 21 16:17:33 crc kubenswrapper[4902]: E0121 16:17:33.542512 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-74df5fd5cf-g8qhb_openstack(f471277e-f0f2-4a10-8234-ed5c3256c82a)\"" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" Jan 21 16:17:33 crc kubenswrapper[4902]: I0121 16:17:33.543636 4902 scope.go:117] "RemoveContainer" containerID="e99be572b5f0de08db1d4011d3d0be2733c9e9eaf11a42e426839109762bfc6f" Jan 21 16:17:33 crc kubenswrapper[4902]: E0121 16:17:33.543898 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-575c784d98-scqmc_openstack(c6ae4f0a-2614-4689-83ae-4cef7ae1df9d)\"" pod="openstack/heat-api-575c784d98-scqmc" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" Jan 21 16:17:33 crc kubenswrapper[4902]: I0121 16:17:33.629305 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:33 crc kubenswrapper[4902]: I0121 16:17:33.661789 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:34 crc kubenswrapper[4902]: I0121 16:17:34.551769 4902 scope.go:117] "RemoveContainer" containerID="e99be572b5f0de08db1d4011d3d0be2733c9e9eaf11a42e426839109762bfc6f" Jan 21 16:17:34 crc kubenswrapper[4902]: I0121 16:17:34.551865 4902 scope.go:117] "RemoveContainer" containerID="0a3cd4119f6b93ca8c960d0e547537c8f6b4c8002580608a93d62f7ad2909dea" Jan 21 16:17:34 crc kubenswrapper[4902]: E0121 16:17:34.552166 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-575c784d98-scqmc_openstack(c6ae4f0a-2614-4689-83ae-4cef7ae1df9d)\"" pod="openstack/heat-api-575c784d98-scqmc" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" Jan 21 16:17:34 crc kubenswrapper[4902]: E0121 16:17:34.552208 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-74df5fd5cf-g8qhb_openstack(f471277e-f0f2-4a10-8234-ed5c3256c82a)\"" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.192605 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" podUID="b55674f9-c7ae-4344-979f-d80fc2d0e03b" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.123:8000/healthcheck\": read tcp 10.217.0.2:38624->10.217.1.123:8000: read: connection reset by peer" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.587140 4902 generic.go:334] "Generic (PLEG): container finished" podID="b55674f9-c7ae-4344-979f-d80fc2d0e03b" containerID="3a0cbf96360641a1be4d26f0628bce956b77ed886ce1151ffc5559087263908f" exitCode=0 Jan 21 16:17:35 crc 
kubenswrapper[4902]: I0121 16:17:35.587211 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" event={"ID":"b55674f9-c7ae-4344-979f-d80fc2d0e03b","Type":"ContainerDied","Data":"3a0cbf96360641a1be4d26f0628bce956b77ed886ce1151ffc5559087263908f"} Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.690201 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.810135 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-combined-ca-bundle\") pod \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.810233 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data-custom\") pod \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.810299 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data\") pod \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.810418 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmrh4\" (UniqueName: \"kubernetes.io/projected/b55674f9-c7ae-4344-979f-d80fc2d0e03b-kube-api-access-lmrh4\") pod \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.816795 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b55674f9-c7ae-4344-979f-d80fc2d0e03b-kube-api-access-lmrh4" (OuterVolumeSpecName: "kube-api-access-lmrh4") pod "b55674f9-c7ae-4344-979f-d80fc2d0e03b" (UID: "b55674f9-c7ae-4344-979f-d80fc2d0e03b"). InnerVolumeSpecName "kube-api-access-lmrh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.824466 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b55674f9-c7ae-4344-979f-d80fc2d0e03b" (UID: "b55674f9-c7ae-4344-979f-d80fc2d0e03b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.843079 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b55674f9-c7ae-4344-979f-d80fc2d0e03b" (UID: "b55674f9-c7ae-4344-979f-d80fc2d0e03b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.880900 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data" (OuterVolumeSpecName: "config-data") pod "b55674f9-c7ae-4344-979f-d80fc2d0e03b" (UID: "b55674f9-c7ae-4344-979f-d80fc2d0e03b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.918290 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmrh4\" (UniqueName: \"kubernetes.io/projected/b55674f9-c7ae-4344-979f-d80fc2d0e03b-kube-api-access-lmrh4\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.918390 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.918406 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.918418 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:36 crc kubenswrapper[4902]: I0121 16:17:36.601215 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:36 crc kubenswrapper[4902]: I0121 16:17:36.601269 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" event={"ID":"b55674f9-c7ae-4344-979f-d80fc2d0e03b","Type":"ContainerDied","Data":"8ddcaecce27ce57414d483ff30439ab5d5ef54218cadf2c9d61387f3947e594b"} Jan 21 16:17:36 crc kubenswrapper[4902]: I0121 16:17:36.601352 4902 scope.go:117] "RemoveContainer" containerID="3a0cbf96360641a1be4d26f0628bce956b77ed886ce1151ffc5559087263908f" Jan 21 16:17:36 crc kubenswrapper[4902]: I0121 16:17:36.630111 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-65bd9b7448-nvhqd"] Jan 21 16:17:36 crc kubenswrapper[4902]: I0121 16:17:36.639114 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-65bd9b7448-nvhqd"] Jan 21 16:17:36 crc kubenswrapper[4902]: I0121 16:17:36.949553 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-786f96566b-w596t" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.111:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8443: connect: connection refused" Jan 21 16:17:38 crc kubenswrapper[4902]: I0121 16:17:38.306842 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b55674f9-c7ae-4344-979f-d80fc2d0e03b" path="/var/lib/kubelet/pods/b55674f9-c7ae-4344-979f-d80fc2d0e03b/volumes" Jan 21 16:17:41 crc kubenswrapper[4902]: I0121 16:17:41.582017 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:41 crc kubenswrapper[4902]: I0121 16:17:41.665636 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-api-575c784d98-scqmc"] Jan 21 16:17:41 crc kubenswrapper[4902]: I0121 16:17:41.796147 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:41 crc kubenswrapper[4902]: I0121 16:17:41.867859 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74df5fd5cf-g8qhb"] Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.178890 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.270130 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data\") pod \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.270184 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-combined-ca-bundle\") pod \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.270322 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data-custom\") pod \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.270567 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw66l\" (UniqueName: \"kubernetes.io/projected/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-kube-api-access-pw66l\") pod \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.276915 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-kube-api-access-pw66l" (OuterVolumeSpecName: "kube-api-access-pw66l") pod "c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" (UID: "c6ae4f0a-2614-4689-83ae-4cef7ae1df9d"). InnerVolumeSpecName "kube-api-access-pw66l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.277105 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" (UID: "c6ae4f0a-2614-4689-83ae-4cef7ae1df9d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.303833 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" (UID: "c6ae4f0a-2614-4689-83ae-4cef7ae1df9d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.344595 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data" (OuterVolumeSpecName: "config-data") pod "c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" (UID: "c6ae4f0a-2614-4689-83ae-4cef7ae1df9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.373494 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw66l\" (UniqueName: \"kubernetes.io/projected/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-kube-api-access-pw66l\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.373525 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.373535 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.373544 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.391242 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.475493 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data\") pod \"f471277e-f0f2-4a10-8234-ed5c3256c82a\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.475652 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-combined-ca-bundle\") pod \"f471277e-f0f2-4a10-8234-ed5c3256c82a\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.475867 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m89x7\" (UniqueName: \"kubernetes.io/projected/f471277e-f0f2-4a10-8234-ed5c3256c82a-kube-api-access-m89x7\") pod \"f471277e-f0f2-4a10-8234-ed5c3256c82a\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.475906 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data-custom\") pod \"f471277e-f0f2-4a10-8234-ed5c3256c82a\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.484945 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f471277e-f0f2-4a10-8234-ed5c3256c82a-kube-api-access-m89x7" (OuterVolumeSpecName: "kube-api-access-m89x7") pod "f471277e-f0f2-4a10-8234-ed5c3256c82a" (UID: "f471277e-f0f2-4a10-8234-ed5c3256c82a"). 
InnerVolumeSpecName "kube-api-access-m89x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.485441 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f471277e-f0f2-4a10-8234-ed5c3256c82a" (UID: "f471277e-f0f2-4a10-8234-ed5c3256c82a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.508745 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f471277e-f0f2-4a10-8234-ed5c3256c82a" (UID: "f471277e-f0f2-4a10-8234-ed5c3256c82a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.529138 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data" (OuterVolumeSpecName: "config-data") pod "f471277e-f0f2-4a10-8234-ed5c3256c82a" (UID: "f471277e-f0f2-4a10-8234-ed5c3256c82a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.578841 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.578872 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.578886 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m89x7\" (UniqueName: \"kubernetes.io/projected/f471277e-f0f2-4a10-8234-ed5c3256c82a-kube-api-access-m89x7\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.578896 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.729030 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-575c784d98-scqmc" event={"ID":"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d","Type":"ContainerDied","Data":"3cccd66a946a157a90314b5d37ad28c489b5d8240c4b1c57e8f9ef0b761138da"} Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.729401 4902 scope.go:117] "RemoveContainer" containerID="e99be572b5f0de08db1d4011d3d0be2733c9e9eaf11a42e426839109762bfc6f" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.729081 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.731334 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" event={"ID":"f471277e-f0f2-4a10-8234-ed5c3256c82a","Type":"ContainerDied","Data":"9bd457e7b58a6be152461f46b27b4879936f13cc46a82ddd9cea36267b98f5de"} Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.731362 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.754814 4902 scope.go:117] "RemoveContainer" containerID="0a3cd4119f6b93ca8c960d0e547537c8f6b4c8002580608a93d62f7ad2909dea" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.777634 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-575c784d98-scqmc"] Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.787528 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-575c784d98-scqmc"] Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.798758 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74df5fd5cf-g8qhb"] Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.807315 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-74df5fd5cf-g8qhb"] Jan 21 16:17:43 crc kubenswrapper[4902]: I0121 16:17:43.294772 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:17:43 crc kubenswrapper[4902]: E0121 16:17:43.295103 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:17:44 crc kubenswrapper[4902]: I0121 16:17:44.306929 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" path="/var/lib/kubelet/pods/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d/volumes" Jan 21 16:17:44 crc kubenswrapper[4902]: I0121 16:17:44.308779 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" path="/var/lib/kubelet/pods/f471277e-f0f2-4a10-8234-ed5c3256c82a/volumes" Jan 21 16:17:46 crc kubenswrapper[4902]: I0121 16:17:46.950095 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-786f96566b-w596t" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.111:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8443: connect: connection refused" Jan 21 16:17:46 crc kubenswrapper[4902]: I0121 16:17:46.950504 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:17:48 crc kubenswrapper[4902]: I0121 16:17:48.525569 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:48 crc kubenswrapper[4902]: I0121 16:17:48.573217 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-77695bdf6-844ml"] Jan 21 16:17:48 crc kubenswrapper[4902]: I0121 16:17:48.573437 4902 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-77695bdf6-844ml" podUID="26cf64d9-8389-473d-a51f-2ca282b5787f" containerName="heat-engine" containerID="cri-o://c51c67b3d5eb2547d5d118a566d05127b5232bbe2fa2468af4680ad00279aa48" gracePeriod=60 Jan 21 16:17:50 crc kubenswrapper[4902]: E0121 16:17:50.999555 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c51c67b3d5eb2547d5d118a566d05127b5232bbe2fa2468af4680ad00279aa48" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 21 16:17:51 crc kubenswrapper[4902]: E0121 16:17:51.001733 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c51c67b3d5eb2547d5d118a566d05127b5232bbe2fa2468af4680ad00279aa48" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 21 16:17:51 crc kubenswrapper[4902]: E0121 16:17:51.003368 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c51c67b3d5eb2547d5d118a566d05127b5232bbe2fa2468af4680ad00279aa48" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 21 16:17:51 crc kubenswrapper[4902]: E0121 16:17:51.003406 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-77695bdf6-844ml" podUID="26cf64d9-8389-473d-a51f-2ca282b5787f" containerName="heat-engine" Jan 21 16:17:53 crc kubenswrapper[4902]: I0121 16:17:53.864557 4902 generic.go:334] "Generic (PLEG): container finished" podID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerID="dd9c814774718de26b2a6f5f159c980f718ec5bd198d471d2426d82a67f32ddd" exitCode=137 Jan 21 16:17:53 crc kubenswrapper[4902]: I0121 16:17:53.866308 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786f96566b-w596t" event={"ID":"b772cd9d-83ce-4675-84de-09f40bdcabe3","Type":"ContainerDied","Data":"dd9c814774718de26b2a6f5f159c980f718ec5bd198d471d2426d82a67f32ddd"} Jan 21 16:17:53 crc kubenswrapper[4902]: I0121 16:17:53.866355 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786f96566b-w596t" event={"ID":"b772cd9d-83ce-4675-84de-09f40bdcabe3","Type":"ContainerDied","Data":"3b11c2a3c705c7354a42e8da86cbb16b0b2109f8538bbf991bf39e340b5a23cd"} Jan 21 16:17:53 crc kubenswrapper[4902]: I0121 16:17:53.866369 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b11c2a3c705c7354a42e8da86cbb16b0b2109f8538bbf991bf39e340b5a23cd" Jan 21 16:17:53 crc kubenswrapper[4902]: I0121 16:17:53.955026 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.032164 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-combined-ca-bundle\") pod \"b772cd9d-83ce-4675-84de-09f40bdcabe3\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.032333 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b772cd9d-83ce-4675-84de-09f40bdcabe3-logs\") pod \"b772cd9d-83ce-4675-84de-09f40bdcabe3\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.032456 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-config-data\") pod \"b772cd9d-83ce-4675-84de-09f40bdcabe3\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.032503 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbc7c\" (UniqueName: \"kubernetes.io/projected/b772cd9d-83ce-4675-84de-09f40bdcabe3-kube-api-access-hbc7c\") pod \"b772cd9d-83ce-4675-84de-09f40bdcabe3\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.032564 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-scripts\") pod \"b772cd9d-83ce-4675-84de-09f40bdcabe3\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.032867 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-tls-certs\") pod \"b772cd9d-83ce-4675-84de-09f40bdcabe3\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.032890 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b772cd9d-83ce-4675-84de-09f40bdcabe3-logs" (OuterVolumeSpecName: "logs") pod "b772cd9d-83ce-4675-84de-09f40bdcabe3" (UID: "b772cd9d-83ce-4675-84de-09f40bdcabe3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.032917 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-secret-key\") pod \"b772cd9d-83ce-4675-84de-09f40bdcabe3\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.033744 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b772cd9d-83ce-4675-84de-09f40bdcabe3-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.040362 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b772cd9d-83ce-4675-84de-09f40bdcabe3" (UID: "b772cd9d-83ce-4675-84de-09f40bdcabe3"). 
InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.040524 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b772cd9d-83ce-4675-84de-09f40bdcabe3-kube-api-access-hbc7c" (OuterVolumeSpecName: "kube-api-access-hbc7c") pod "b772cd9d-83ce-4675-84de-09f40bdcabe3" (UID: "b772cd9d-83ce-4675-84de-09f40bdcabe3"). InnerVolumeSpecName "kube-api-access-hbc7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.064733 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-scripts" (OuterVolumeSpecName: "scripts") pod "b772cd9d-83ce-4675-84de-09f40bdcabe3" (UID: "b772cd9d-83ce-4675-84de-09f40bdcabe3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.075704 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-config-data" (OuterVolumeSpecName: "config-data") pod "b772cd9d-83ce-4675-84de-09f40bdcabe3" (UID: "b772cd9d-83ce-4675-84de-09f40bdcabe3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.080437 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b772cd9d-83ce-4675-84de-09f40bdcabe3" (UID: "b772cd9d-83ce-4675-84de-09f40bdcabe3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.107119 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "b772cd9d-83ce-4675-84de-09f40bdcabe3" (UID: "b772cd9d-83ce-4675-84de-09f40bdcabe3"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.135344 4902 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.135372 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.135381 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.135389 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbc7c\" (UniqueName: \"kubernetes.io/projected/b772cd9d-83ce-4675-84de-09f40bdcabe3-kube-api-access-hbc7c\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.135401 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.135410 4902 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.300494 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:17:54 crc kubenswrapper[4902]: E0121 16:17:54.300728 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.877252 4902 generic.go:334] "Generic (PLEG): container finished" podID="26cf64d9-8389-473d-a51f-2ca282b5787f" containerID="c51c67b3d5eb2547d5d118a566d05127b5232bbe2fa2468af4680ad00279aa48" exitCode=0 Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.877415 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-77695bdf6-844ml" event={"ID":"26cf64d9-8389-473d-a51f-2ca282b5787f","Type":"ContainerDied","Data":"c51c67b3d5eb2547d5d118a566d05127b5232bbe2fa2468af4680ad00279aa48"} Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.878385 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-77695bdf6-844ml" event={"ID":"26cf64d9-8389-473d-a51f-2ca282b5787f","Type":"ContainerDied","Data":"cb43a5fb250f0e18d67e65759c41da1b4870b16bb8305d85b3eb43efb5279133"} Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.878437 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb43a5fb250f0e18d67e65759c41da1b4870b16bb8305d85b3eb43efb5279133" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.878572 4902 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.915541 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.935378 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-786f96566b-w596t"] Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.951128 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-786f96566b-w596t"] Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.951772 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data-custom\") pod \"26cf64d9-8389-473d-a51f-2ca282b5787f\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.951831 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv2sw\" (UniqueName: \"kubernetes.io/projected/26cf64d9-8389-473d-a51f-2ca282b5787f-kube-api-access-tv2sw\") pod \"26cf64d9-8389-473d-a51f-2ca282b5787f\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.951971 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data\") pod \"26cf64d9-8389-473d-a51f-2ca282b5787f\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.952133 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-combined-ca-bundle\") pod \"26cf64d9-8389-473d-a51f-2ca282b5787f\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.956375 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "26cf64d9-8389-473d-a51f-2ca282b5787f" (UID: "26cf64d9-8389-473d-a51f-2ca282b5787f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.956990 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26cf64d9-8389-473d-a51f-2ca282b5787f-kube-api-access-tv2sw" (OuterVolumeSpecName: "kube-api-access-tv2sw") pod "26cf64d9-8389-473d-a51f-2ca282b5787f" (UID: "26cf64d9-8389-473d-a51f-2ca282b5787f"). InnerVolumeSpecName "kube-api-access-tv2sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.986952 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26cf64d9-8389-473d-a51f-2ca282b5787f" (UID: "26cf64d9-8389-473d-a51f-2ca282b5787f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:55 crc kubenswrapper[4902]: I0121 16:17:55.001898 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data" (OuterVolumeSpecName: "config-data") pod "26cf64d9-8389-473d-a51f-2ca282b5787f" (UID: "26cf64d9-8389-473d-a51f-2ca282b5787f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:55 crc kubenswrapper[4902]: I0121 16:17:55.054348 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:55 crc kubenswrapper[4902]: I0121 16:17:55.054380 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:55 crc kubenswrapper[4902]: I0121 16:17:55.054390 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv2sw\" (UniqueName: \"kubernetes.io/projected/26cf64d9-8389-473d-a51f-2ca282b5787f-kube-api-access-tv2sw\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:55 crc kubenswrapper[4902]: I0121 16:17:55.054400 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:55 crc kubenswrapper[4902]: I0121 16:17:55.886710 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:55 crc kubenswrapper[4902]: I0121 16:17:55.917494 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-77695bdf6-844ml"] Jan 21 16:17:55 crc kubenswrapper[4902]: I0121 16:17:55.927495 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-77695bdf6-844ml"] Jan 21 16:17:56 crc kubenswrapper[4902]: I0121 16:17:56.309328 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26cf64d9-8389-473d-a51f-2ca282b5787f" path="/var/lib/kubelet/pods/26cf64d9-8389-473d-a51f-2ca282b5787f/volumes" Jan 21 16:17:56 crc kubenswrapper[4902]: I0121 16:17:56.310389 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" path="/var/lib/kubelet/pods/b772cd9d-83ce-4675-84de-09f40bdcabe3/volumes" Jan 21 16:18:05 crc kubenswrapper[4902]: I0121 16:18:05.048896 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5eaa-account-create-update-6b2pj"] Jan 21 16:18:05 crc kubenswrapper[4902]: I0121 16:18:05.056900 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5eaa-account-create-update-6b2pj"] Jan 21 16:18:05 crc kubenswrapper[4902]: I0121 16:18:05.066682 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-nh5zs"] Jan 21 16:18:05 crc kubenswrapper[4902]: I0121 16:18:05.074699 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-nh5zs"] Jan 21 16:18:06 crc kubenswrapper[4902]: I0121 16:18:06.311995 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="316e80e8-1286-4be7-b686-90693f8e7c95" path="/var/lib/kubelet/pods/316e80e8-1286-4be7-b686-90693f8e7c95/volumes" Jan 21 16:18:06 crc kubenswrapper[4902]: I0121 
16:18:06.312773 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8d97084-2d8b-44c2-877e-b09211b7d84d" path="/var/lib/kubelet/pods/d8d97084-2d8b-44c2-877e-b09211b7d84d/volumes" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.305518 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.306264 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.633921 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn"] Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634673 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634696 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634711 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634723 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634737 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634745 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634756 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634765 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634779 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b55674f9-c7ae-4344-979f-d80fc2d0e03b" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634788 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b55674f9-c7ae-4344-979f-d80fc2d0e03b" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634815 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634824 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634838 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon-log" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634846 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon-log" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634865 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67ffd84-72d3-4d63-b99a-0fe8ebe12753" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634874 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67ffd84-72d3-4d63-b99a-0fe8ebe12753" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634889 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26cf64d9-8389-473d-a51f-2ca282b5787f" containerName="heat-engine" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634897 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="26cf64d9-8389-473d-a51f-2ca282b5787f" containerName="heat-engine" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635134 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635148 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="26cf64d9-8389-473d-a51f-2ca282b5787f" containerName="heat-engine" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635159 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635166 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b55674f9-c7ae-4344-979f-d80fc2d0e03b" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635177 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon-log" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635184 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635191 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635204 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67ffd84-72d3-4d63-b99a-0fe8ebe12753" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635212 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.636607 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.638947 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.645605 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn"] Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.714523 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n55b\" (UniqueName: \"kubernetes.io/projected/052d7e2b-1135-41ae-8c3e-a750c22fce27-kube-api-access-7n55b\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.714604 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.715041 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.816794 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.816880 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n55b\" (UniqueName: \"kubernetes.io/projected/052d7e2b-1135-41ae-8c3e-a750c22fce27-kube-api-access-7n55b\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.816927 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.817577 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.817678 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.849919 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n55b\" (UniqueName: \"kubernetes.io/projected/052d7e2b-1135-41ae-8c3e-a750c22fce27-kube-api-access-7n55b\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.965674 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:09 crc kubenswrapper[4902]: I0121 16:18:09.433218 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn"] Jan 21 16:18:10 crc kubenswrapper[4902]: I0121 16:18:10.025720 4902 generic.go:334] "Generic (PLEG): container finished" podID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerID="0b00fe8c5f033dc6351be1dde481bb8d45c0867e594bdad310b21aa2f63223d4" exitCode=0 Jan 21 16:18:10 crc kubenswrapper[4902]: I0121 16:18:10.025930 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" event={"ID":"052d7e2b-1135-41ae-8c3e-a750c22fce27","Type":"ContainerDied","Data":"0b00fe8c5f033dc6351be1dde481bb8d45c0867e594bdad310b21aa2f63223d4"} Jan 21 16:18:10 crc kubenswrapper[4902]: I0121 16:18:10.026065 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" event={"ID":"052d7e2b-1135-41ae-8c3e-a750c22fce27","Type":"ContainerStarted","Data":"78d219e3bb4b4d28dfde0cb06bec377b18c0b547c9f904dee848391bb2a7d6f8"} Jan 21 16:18:13 crc kubenswrapper[4902]: I0121 16:18:13.052981 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-k7rr4"] Jan 21 16:18:13 crc kubenswrapper[4902]: I0121 16:18:13.061663 4902 generic.go:334] "Generic (PLEG): container finished" podID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerID="862bb0aa13c9e2f00e09139d82c13884734dfb5915760790c082c3ccda69f0a6" exitCode=0 Jan 21 16:18:13 crc kubenswrapper[4902]: I0121 16:18:13.061725 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" event={"ID":"052d7e2b-1135-41ae-8c3e-a750c22fce27","Type":"ContainerDied","Data":"862bb0aa13c9e2f00e09139d82c13884734dfb5915760790c082c3ccda69f0a6"} Jan 21 16:18:13 crc kubenswrapper[4902]: I0121 16:18:13.064576 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-k7rr4"] 
Jan 21 16:18:14 crc kubenswrapper[4902]: I0121 16:18:14.072805 4902 generic.go:334] "Generic (PLEG): container finished" podID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerID="a5e85813eaf0a006813b7355e03235929731ddf57140673e0f3ee7fa69ff26ae" exitCode=0 Jan 21 16:18:14 crc kubenswrapper[4902]: I0121 16:18:14.072885 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" event={"ID":"052d7e2b-1135-41ae-8c3e-a750c22fce27","Type":"ContainerDied","Data":"a5e85813eaf0a006813b7355e03235929731ddf57140673e0f3ee7fa69ff26ae"} Jan 21 16:18:14 crc kubenswrapper[4902]: I0121 16:18:14.306216 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="610eddf1-f5de-40bb-8946-2092c4edfa9c" path="/var/lib/kubelet/pods/610eddf1-f5de-40bb-8946-2092c4edfa9c/volumes" Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.447067 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.593059 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-util\") pod \"052d7e2b-1135-41ae-8c3e-a750c22fce27\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.593675 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n55b\" (UniqueName: \"kubernetes.io/projected/052d7e2b-1135-41ae-8c3e-a750c22fce27-kube-api-access-7n55b\") pod \"052d7e2b-1135-41ae-8c3e-a750c22fce27\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.593745 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-bundle\") pod \"052d7e2b-1135-41ae-8c3e-a750c22fce27\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.597726 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-bundle" (OuterVolumeSpecName: "bundle") pod "052d7e2b-1135-41ae-8c3e-a750c22fce27" (UID: "052d7e2b-1135-41ae-8c3e-a750c22fce27"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.603339 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052d7e2b-1135-41ae-8c3e-a750c22fce27-kube-api-access-7n55b" (OuterVolumeSpecName: "kube-api-access-7n55b") pod "052d7e2b-1135-41ae-8c3e-a750c22fce27" (UID: "052d7e2b-1135-41ae-8c3e-a750c22fce27"). InnerVolumeSpecName "kube-api-access-7n55b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.608636 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-util" (OuterVolumeSpecName: "util") pod "052d7e2b-1135-41ae-8c3e-a750c22fce27" (UID: "052d7e2b-1135-41ae-8c3e-a750c22fce27"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.696788 4902 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-util\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.696840 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n55b\" (UniqueName: \"kubernetes.io/projected/052d7e2b-1135-41ae-8c3e-a750c22fce27-kube-api-access-7n55b\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.696858 4902 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:16 crc kubenswrapper[4902]: I0121 16:18:16.101298 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" event={"ID":"052d7e2b-1135-41ae-8c3e-a750c22fce27","Type":"ContainerDied","Data":"78d219e3bb4b4d28dfde0cb06bec377b18c0b547c9f904dee848391bb2a7d6f8"} Jan 21 16:18:16 crc kubenswrapper[4902]: I0121 16:18:16.101585 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78d219e3bb4b4d28dfde0cb06bec377b18c0b547c9f904dee848391bb2a7d6f8" Jan 21 16:18:16 crc kubenswrapper[4902]: I0121 16:18:16.101365 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:22 crc kubenswrapper[4902]: I0121 16:18:22.295613 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:18:22 crc kubenswrapper[4902]: E0121 16:18:22.296305 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.338512 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr"] Jan 21 16:18:26 crc kubenswrapper[4902]: E0121 16:18:26.339646 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerName="extract" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.339665 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerName="extract" Jan 21 16:18:26 crc kubenswrapper[4902]: E0121 16:18:26.339684 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerName="pull" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.339692 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerName="pull" Jan 21 16:18:26 crc kubenswrapper[4902]: E0121 16:18:26.339709 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerName="util" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.339719 4902 state_mem.go:107] "Deleted 
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.339942 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerName="extract"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.340919 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.343893 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-wrvml"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.344183 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.344330 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.360690 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr"]
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.472737 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x"]
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.474158 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.477070 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-zb9cz"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.477510 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.500095 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks"]
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.501672 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.526027 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x"]
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.538282 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q654\" (UniqueName: \"kubernetes.io/projected/5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5-kube-api-access-2q654\") pod \"obo-prometheus-operator-68bc856cb9-tw4cr\" (UID: \"5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.562504 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks"]
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.639975 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q654\" (UniqueName: \"kubernetes.io/projected/5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5-kube-api-access-2q654\") pod \"obo-prometheus-operator-68bc856cb9-tw4cr\" (UID: \"5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.640052 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dce978e0-318d-4086-8594-08da83f1fe23-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-l469x\" (UID: \"dce978e0-318d-4086-8594-08da83f1fe23\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.640152 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dce978e0-318d-4086-8594-08da83f1fe23-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-l469x\" (UID: \"dce978e0-318d-4086-8594-08da83f1fe23\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.640283 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c014cd52-9da2-4fa7-96b6-0a400835f56e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-csqks\" (UID: \"c014cd52-9da2-4fa7-96b6-0a400835f56e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.640366 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c014cd52-9da2-4fa7-96b6-0a400835f56e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-csqks\" (UID: \"c014cd52-9da2-4fa7-96b6-0a400835f56e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.677622 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6xc5d"]
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.678487 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q654\" (UniqueName: \"kubernetes.io/projected/5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5-kube-api-access-2q654\") pod \"obo-prometheus-operator-68bc856cb9-tw4cr\" (UID: \"5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.679349 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-6xc5d"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.686528 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.686760 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-rm684"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.690742 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6xc5d"]
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.741967 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c014cd52-9da2-4fa7-96b6-0a400835f56e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-csqks\" (UID: \"c014cd52-9da2-4fa7-96b6-0a400835f56e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.742098 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c014cd52-9da2-4fa7-96b6-0a400835f56e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-csqks\" (UID: \"c014cd52-9da2-4fa7-96b6-0a400835f56e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.742154 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dce978e0-318d-4086-8594-08da83f1fe23-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-l469x\" (UID: \"dce978e0-318d-4086-8594-08da83f1fe23\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.742225 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dce978e0-318d-4086-8594-08da83f1fe23-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-l469x\" (UID: \"dce978e0-318d-4086-8594-08da83f1fe23\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.746366 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dce978e0-318d-4086-8594-08da83f1fe23-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-l469x\" (UID: \"dce978e0-318d-4086-8594-08da83f1fe23\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.746700 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dce978e0-318d-4086-8594-08da83f1fe23-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-l469x\" (UID: \"dce978e0-318d-4086-8594-08da83f1fe23\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.749215 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c014cd52-9da2-4fa7-96b6-0a400835f56e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-csqks\" (UID: \"c014cd52-9da2-4fa7-96b6-0a400835f56e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.752035 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c014cd52-9da2-4fa7-96b6-0a400835f56e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-csqks\" (UID: \"c014cd52-9da2-4fa7-96b6-0a400835f56e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.789128 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-k6f6k"]
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.790739 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-k6f6k"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.794473 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-llxvm"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.798746 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.808081 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-k6f6k"]
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.855031 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cdfe14cf-a2d6-4df7-92b5-c4146bdab44d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6xc5d\" (UID: \"cdfe14cf-a2d6-4df7-92b5-c4146bdab44d\") " pod="openshift-operators/observability-operator-59bdc8b94-6xc5d"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.855196 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfs8l\" (UniqueName: \"kubernetes.io/projected/cdfe14cf-a2d6-4df7-92b5-c4146bdab44d-kube-api-access-wfs8l\") pod \"observability-operator-59bdc8b94-6xc5d\" (UID: \"cdfe14cf-a2d6-4df7-92b5-c4146bdab44d\") " pod="openshift-operators/observability-operator-59bdc8b94-6xc5d"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.862826 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.941276 4902 scope.go:117] "RemoveContainer" containerID="73644d909cc281b656bcc92b7fe668f43e2f43e5a2df8a9a26185cf7ab096d45"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.958520 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cdfe14cf-a2d6-4df7-92b5-c4146bdab44d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6xc5d\" (UID: \"cdfe14cf-a2d6-4df7-92b5-c4146bdab44d\") " pod="openshift-operators/observability-operator-59bdc8b94-6xc5d"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.965997 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwcll\" (UniqueName: \"kubernetes.io/projected/ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a-kube-api-access-dwcll\") pod \"perses-operator-5bf474d74f-k6f6k\" (UID: \"ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a\") " pod="openshift-operators/perses-operator-5bf474d74f-k6f6k"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.966617 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.966902 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-k6f6k\" (UID: \"ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a\") " pod="openshift-operators/perses-operator-5bf474d74f-k6f6k"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.966973 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfs8l\" (UniqueName: \"kubernetes.io/projected/cdfe14cf-a2d6-4df7-92b5-c4146bdab44d-kube-api-access-wfs8l\") pod \"observability-operator-59bdc8b94-6xc5d\" (UID: \"cdfe14cf-a2d6-4df7-92b5-c4146bdab44d\") " pod="openshift-operators/observability-operator-59bdc8b94-6xc5d"
Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.985352 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cdfe14cf-a2d6-4df7-92b5-c4146bdab44d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6xc5d\" (UID: \"cdfe14cf-a2d6-4df7-92b5-c4146bdab44d\") " pod="openshift-operators/observability-operator-59bdc8b94-6xc5d"
Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.012601 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfs8l\" (UniqueName: \"kubernetes.io/projected/cdfe14cf-a2d6-4df7-92b5-c4146bdab44d-kube-api-access-wfs8l\") pod \"observability-operator-59bdc8b94-6xc5d\" (UID: \"cdfe14cf-a2d6-4df7-92b5-c4146bdab44d\") " pod="openshift-operators/observability-operator-59bdc8b94-6xc5d"
Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.024338 4902 scope.go:117] "RemoveContainer" containerID="f9ff394d565c17472cbe0972635a74048f6673c7d9a12c90517226508f39624b"
Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.069330 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwcll\" (UniqueName: \"kubernetes.io/projected/ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a-kube-api-access-dwcll\") pod \"perses-operator-5bf474d74f-k6f6k\" (UID: \"ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a\") " pod="openshift-operators/perses-operator-5bf474d74f-k6f6k"
Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.069654 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-k6f6k\" (UID: \"ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a\") " pod="openshift-operators/perses-operator-5bf474d74f-k6f6k"
Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.070532 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-k6f6k\" (UID: \"ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a\") " pod="openshift-operators/perses-operator-5bf474d74f-k6f6k"
Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.086319 4902 scope.go:117] "RemoveContainer" containerID="08d576dd917c4a5813c6d9db476bd6fcba6691cafc01f2c3b9a02a013671f644"
Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.090631 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwcll\" (UniqueName: \"kubernetes.io/projected/ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a-kube-api-access-dwcll\") pod \"perses-operator-5bf474d74f-k6f6k\" (UID: \"ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a\") " pod="openshift-operators/perses-operator-5bf474d74f-k6f6k"
Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.162283 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-6xc5d"
Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.313224 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-k6f6k"
Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.380523 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x"]
Jan 21 16:18:27 crc kubenswrapper[4902]: W0121 16:18:27.413125 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddce978e0_318d_4086_8594_08da83f1fe23.slice/crio-06baaa8ce7f1ee2729da5b7463082d9063adfdccc1ce078700fcef828c7c54ae WatchSource:0}: Error finding container 06baaa8ce7f1ee2729da5b7463082d9063adfdccc1ce078700fcef828c7c54ae: Status 404 returned error can't find the container with id 06baaa8ce7f1ee2729da5b7463082d9063adfdccc1ce078700fcef828c7c54ae
Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.481832 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks"]
Jan 21 16:18:27 crc kubenswrapper[4902]: W0121 16:18:27.624157 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bef9b7b_7b8b_4a3b_82ca_cc12bfa8d7a5.slice/crio-c9e3ec7b76a75727f0b440c1c28989960e5fbedaeea7de46806fdff61bd2465c WatchSource:0}: Error finding container c9e3ec7b76a75727f0b440c1c28989960e5fbedaeea7de46806fdff61bd2465c: Status 404 returned error can't find the container with id c9e3ec7b76a75727f0b440c1c28989960e5fbedaeea7de46806fdff61bd2465c
Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.634237 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr"]
Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.822196 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6xc5d"]
Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.930533 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-k6f6k"]
Jan 21 16:18:27 crc kubenswrapper[4902]: W0121 16:18:27.938004 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea8d550d_3cd6_4d90_9209_f11bbf7d4e3a.slice/crio-813c998872b17b9d7ee3b9b5723b300c843217cc1529bcd7e89717662e75c39b WatchSource:0}: Error finding container 813c998872b17b9d7ee3b9b5723b300c843217cc1529bcd7e89717662e75c39b: Status 404 returned error can't find the container with id 813c998872b17b9d7ee3b9b5723b300c843217cc1529bcd7e89717662e75c39b
Jan 21 16:18:28 crc kubenswrapper[4902]: I0121 16:18:28.256143 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-6xc5d" event={"ID":"cdfe14cf-a2d6-4df7-92b5-c4146bdab44d","Type":"ContainerStarted","Data":"089b6b52baec51ea81aa390dfae2c9f909a4e8e2b79dd484df55340e6e1f58fa"}
Jan 21 16:18:28 crc kubenswrapper[4902]: I0121 16:18:28.259954 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks" event={"ID":"c014cd52-9da2-4fa7-96b6-0a400835f56e","Type":"ContainerStarted","Data":"1c0681d4cbfe8ce60ee3345a1f6d3377c0033bcd55ab3ea739533131e9008e02"}
Jan 21 16:18:28 crc kubenswrapper[4902]: I0121 16:18:28.263350 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr" event={"ID":"5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5","Type":"ContainerStarted","Data":"c9e3ec7b76a75727f0b440c1c28989960e5fbedaeea7de46806fdff61bd2465c"}
Jan 21 16:18:28 crc kubenswrapper[4902]: I0121 16:18:28.265326 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-k6f6k" event={"ID":"ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a","Type":"ContainerStarted","Data":"813c998872b17b9d7ee3b9b5723b300c843217cc1529bcd7e89717662e75c39b"}
Jan 21 16:18:28 crc kubenswrapper[4902]: I0121 16:18:28.269164 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x" event={"ID":"dce978e0-318d-4086-8594-08da83f1fe23","Type":"ContainerStarted","Data":"06baaa8ce7f1ee2729da5b7463082d9063adfdccc1ce078700fcef828c7c54ae"}
Jan 21 16:18:35 crc kubenswrapper[4902]: I0121 16:18:35.296059 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac"
Jan 21 16:18:35 crc kubenswrapper[4902]: E0121 16:18:35.296967 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.415832 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-k6f6k" event={"ID":"ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a","Type":"ContainerStarted","Data":"1dd6288d3dce16efddd74e9158b867d9611e49f940f2c1411740d9d7dd589b64"}
Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.417325 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-k6f6k"
Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.419701 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-6xc5d" event={"ID":"cdfe14cf-a2d6-4df7-92b5-c4146bdab44d","Type":"ContainerStarted","Data":"57d93f2e374b6d8ce1ce82e849b147adf267dbddd1e75268a89cc64514ade57b"}
Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.420751 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-6xc5d"
Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.424579 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-6xc5d"
Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.428939 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks" event={"ID":"c014cd52-9da2-4fa7-96b6-0a400835f56e","Type":"ContainerStarted","Data":"1d8e9ee848c15b0397cea97beacec61eb32e7f5507bfdeb91b4ff5760f150317"}
Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.434526 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x" event={"ID":"dce978e0-318d-4086-8594-08da83f1fe23","Type":"ContainerStarted","Data":"b7db0be9f863c77dbaecee87b14573b8c2484dc9ba565f7f8666329296adf971"}
Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.444071 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr" event={"ID":"5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5","Type":"ContainerStarted","Data":"ccad71d335f9ae9bfad642b05db21d00feb1eae6305188b52d64aa7aaa2a201c"}
Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.446718 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-k6f6k" podStartSLOduration=2.962584365 podStartE2EDuration="11.446699682s" podCreationTimestamp="2026-01-21 16:18:26 +0000 UTC" firstStartedPulling="2026-01-21 16:18:27.940794722 +0000 UTC m=+6270.017627751" lastFinishedPulling="2026-01-21 16:18:36.424910039 +0000 UTC m=+6278.501743068" observedRunningTime="2026-01-21 16:18:37.4402155 +0000 UTC m=+6279.517048519" watchObservedRunningTime="2026-01-21 16:18:37.446699682 +0000 UTC m=+6279.523532711"
Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.464731 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-6xc5d" podStartSLOduration=2.802443619 podStartE2EDuration="11.464714159s" podCreationTimestamp="2026-01-21 16:18:26 +0000 UTC" firstStartedPulling="2026-01-21 16:18:27.831606409 +0000 UTC m=+6269.908439438" lastFinishedPulling="2026-01-21 16:18:36.493876949 +0000 UTC m=+6278.570709978" observedRunningTime="2026-01-21 16:18:37.460192592 +0000 UTC m=+6279.537025621" watchObservedRunningTime="2026-01-21 16:18:37.464714159 +0000 UTC m=+6279.541547188"
Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.494905 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x" podStartSLOduration=2.578718793 podStartE2EDuration="11.494885788s" podCreationTimestamp="2026-01-21 16:18:26 +0000 UTC" firstStartedPulling="2026-01-21 16:18:27.415691685 +0000 UTC m=+6269.492524714" lastFinishedPulling="2026-01-21 16:18:36.33185868 +0000 UTC m=+6278.408691709" observedRunningTime="2026-01-21 16:18:37.488517409 +0000 UTC m=+6279.565350438" watchObservedRunningTime="2026-01-21 16:18:37.494885788 +0000 UTC m=+6279.571718817"
Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.523328 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks" podStartSLOduration=2.711758907 podStartE2EDuration="11.523304018s" podCreationTimestamp="2026-01-21 16:18:26 +0000 UTC" firstStartedPulling="2026-01-21 16:18:27.52033542 +0000 UTC m=+6269.597168459" lastFinishedPulling="2026-01-21 16:18:36.331880541 +0000 UTC m=+6278.408713570" observedRunningTime="2026-01-21 16:18:37.516533998 +0000 UTC m=+6279.593367027" watchObservedRunningTime="2026-01-21 16:18:37.523304018 +0000 UTC m=+6279.600137047"
Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.558518 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr" podStartSLOduration=2.860319139 podStartE2EDuration="11.558500429s" podCreationTimestamp="2026-01-21 16:18:26 +0000 UTC" firstStartedPulling="2026-01-21 16:18:27.638132215 +0000 UTC m=+6269.714965244" lastFinishedPulling="2026-01-21 16:18:36.336313505 +0000 UTC m=+6278.413146534" observedRunningTime="2026-01-21 16:18:37.554486056 +0000 UTC m=+6279.631319085" watchObservedRunningTime="2026-01-21 16:18:37.558500429 +0000 UTC m=+6279.635333458"
Jan 21 16:18:47 crc kubenswrapper[4902]: I0121 16:18:47.295900 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac"
Jan 21 16:18:47 crc kubenswrapper[4902]: E0121 16:18:47.296687 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:18:47 crc kubenswrapper[4902]: I0121 16:18:47.315600 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-k6f6k"
Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.773422 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.774033 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="f901a0e2-6941-4d4e-a90a-2905acf87521" containerName="openstackclient" containerID="cri-o://0820f291e3e79ca9f589a0a9fd094ceca1ca151624389e86bb426b3920d38db1" gracePeriod=2
Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.782384 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.823723 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 21 16:18:49 crc kubenswrapper[4902]: E0121 16:18:49.824190 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f901a0e2-6941-4d4e-a90a-2905acf87521" containerName="openstackclient"
Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.824208 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f901a0e2-6941-4d4e-a90a-2905acf87521" containerName="openstackclient"
Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.824410 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f901a0e2-6941-4d4e-a90a-2905acf87521" containerName="openstackclient"
Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.825250 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.837407 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.861294 4902 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="f901a0e2-6941-4d4e-a90a-2905acf87521" podUID="7fbbd7fc-3ed5-4747-8723-d1b24677c146"
Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.991421 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient"
Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.991551 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config-secret\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient"
Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.991597 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lx4m\" (UniqueName: \"kubernetes.io/projected/7fbbd7fc-3ed5-4747-8723-d1b24677c146-kube-api-access-6lx4m\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient"
Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.991928 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.015909 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.018803 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.029438 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-89mvr"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.032340 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.095331 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.095436 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.095658 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config-secret\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.095713 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lx4m\" (UniqueName: \"kubernetes.io/projected/7fbbd7fc-3ed5-4747-8723-d1b24677c146-kube-api-access-6lx4m\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.096484 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.103795 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.118600 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config-secret\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.140697 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lx4m\" (UniqueName: \"kubernetes.io/projected/7fbbd7fc-3ed5-4747-8723-d1b24677c146-kube-api-access-6lx4m\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.170562 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.204532 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xqw8\" (UniqueName: \"kubernetes.io/projected/11823665-4fce-4950-a6d3-bc34bafbc01d-kube-api-access-7xqw8\") pod \"kube-state-metrics-0\" (UID: \"11823665-4fce-4950-a6d3-bc34bafbc01d\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.312505 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xqw8\" (UniqueName: \"kubernetes.io/projected/11823665-4fce-4950-a6d3-bc34bafbc01d-kube-api-access-7xqw8\") pod \"kube-state-metrics-0\" (UID: \"11823665-4fce-4950-a6d3-bc34bafbc01d\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.357285 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xqw8\" (UniqueName: \"kubernetes.io/projected/11823665-4fce-4950-a6d3-bc34bafbc01d-kube-api-access-7xqw8\") pod \"kube-state-metrics-0\" (UID: \"11823665-4fce-4950-a6d3-bc34bafbc01d\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.632786 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.822082 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.834780 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.840985 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.851125 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-c52zv"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.851709 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.851810 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.855784 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.865368 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.933866 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/657b791a-81e2-483e-8ae9-b261f3bc0c41-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.934003 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.934079 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.934124 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/657b791a-81e2-483e-8ae9-b261f3bc0c41-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.934168 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnj2c\" (UniqueName: \"kubernetes.io/projected/657b791a-81e2-483e-8ae9-b261f3bc0c41-kube-api-access-rnj2c\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.934227 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/657b791a-81e2-483e-8ae9-b261f3bc0c41-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.934248 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.036683 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/657b791a-81e2-483e-8ae9-b261f3bc0c41-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.037024 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.037088 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.037121 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/657b791a-81e2-483e-8ae9-b261f3bc0c41-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.037156 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnj2c\" (UniqueName: \"kubernetes.io/projected/657b791a-81e2-483e-8ae9-b261f3bc0c41-kube-api-access-rnj2c\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.037199 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/657b791a-81e2-483e-8ae9-b261f3bc0c41-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.037218 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.043118 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.043218 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/657b791a-81e2-483e-8ae9-b261f3bc0c41-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.070263 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/657b791a-81e2-483e-8ae9-b261f3bc0c41-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.070376 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.070558 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.070567 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/657b791a-81e2-483e-8ae9-b261f3bc0c41-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.080684 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnj2c\" (UniqueName: \"kubernetes.io/projected/657b791a-81e2-483e-8ae9-b261f3bc0c41-kube-api-access-rnj2c\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.194636 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.260505 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.468905 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.491820 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.494716 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.495305 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.508755 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.508960 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.509137 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.509286 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.509426 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.512002 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-nkftv"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.559883 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.559970 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-config\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.560007 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5bd53316-beed-49bb-8eec-a78efdb19f0a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.560077 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.590526 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.598727 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.598880 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldpwc\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-kube-api-access-ldpwc\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.598938 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.598966 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.599019 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.599107 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.697416 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.703528 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-config\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.703655 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5bd53316-beed-49bb-8eec-a78efdb19f0a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.704077 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.704182 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.704328 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldpwc\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-kube-api-access-ldpwc\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.704425 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.704453 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.704512 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.704590 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
16:18:51.704590 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.704687 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.712669 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.712758 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.713188 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.720301 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.724203 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5bd53316-beed-49bb-8eec-a78efdb19f0a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.730819 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-config\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.734768 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.734820 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2072cb44e63c79cbe1d1309d1abe2e89a3820c0b6ba86768b97aed1379d46137/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.746959 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldpwc\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-kube-api-access-ldpwc\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.748857 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.749356 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.757266 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.794909 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"11823665-4fce-4950-a6d3-bc34bafbc01d","Type":"ContainerStarted","Data":"6cce11915f96493257a7b6fc755ce2c6cf10806ef6428a4421a57569fde4b038"}
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.801910 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7fbbd7fc-3ed5-4747-8723-d1b24677c146","Type":"ContainerStarted","Data":"01cb86272a3a106d5f7caa4d00fe9d1b23462318a93651ce85d1c2e200e45add"}
Jan 21 16:18:52 crc kubenswrapper[4902]: I0121 16:18:52.192080 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Jan 21 16:18:52 crc kubenswrapper[4902]: W0121 16:18:52.309649 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod657b791a_81e2_483e_8ae9_b261f3bc0c41.slice/crio-48dae7a089436e6fe913bb9894877d101fb5d1e2ac226b31a442e91d78824490 WatchSource:0}: Error finding container 48dae7a089436e6fe913bb9894877d101fb5d1e2ac226b31a442e91d78824490: Status 404 returned error can't find the container with id 48dae7a089436e6fe913bb9894877d101fb5d1e2ac226b31a442e91d78824490
Jan 21 16:18:52 crc kubenswrapper[4902]: I0121 16:18:52.400380 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:52 crc kubenswrapper[4902]: I0121 16:18:52.448113 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 21 16:18:52 crc kubenswrapper[4902]: I0121 16:18:52.820821 4902 generic.go:334] "Generic (PLEG): container finished" podID="f901a0e2-6941-4d4e-a90a-2905acf87521" containerID="0820f291e3e79ca9f589a0a9fd094ceca1ca151624389e86bb426b3920d38db1" exitCode=137
Jan 21 16:18:52 crc kubenswrapper[4902]: I0121 16:18:52.822547 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"657b791a-81e2-483e-8ae9-b261f3bc0c41","Type":"ContainerStarted","Data":"48dae7a089436e6fe913bb9894877d101fb5d1e2ac226b31a442e91d78824490"}
Jan 21 16:18:52 crc kubenswrapper[4902]: I0121 16:18:52.994855 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 21 16:18:52 crc kubenswrapper[4902]: W0121 16:18:52.998058 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bd53316_beed_49bb_8eec_a78efdb19f0a.slice/crio-0fc37cbdb7d70159ca8295f97b2fa25e8f7409a5c1a417f96662f1f15ccc1985 WatchSource:0}: Error finding container 0fc37cbdb7d70159ca8295f97b2fa25e8f7409a5c1a417f96662f1f15ccc1985: Status 404 returned error can't find the container with id 0fc37cbdb7d70159ca8295f97b2fa25e8f7409a5c1a417f96662f1f15ccc1985
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.381268 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.464765 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config\") pod \"f901a0e2-6941-4d4e-a90a-2905acf87521\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") "
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.464961 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-combined-ca-bundle\") pod \"f901a0e2-6941-4d4e-a90a-2905acf87521\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") "
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.465061 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89zjk\" (UniqueName: \"kubernetes.io/projected/f901a0e2-6941-4d4e-a90a-2905acf87521-kube-api-access-89zjk\") pod \"f901a0e2-6941-4d4e-a90a-2905acf87521\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") "
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.465241 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config-secret\") pod \"f901a0e2-6941-4d4e-a90a-2905acf87521\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") "
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.471380 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f901a0e2-6941-4d4e-a90a-2905acf87521-kube-api-access-89zjk" (OuterVolumeSpecName: "kube-api-access-89zjk") pod "f901a0e2-6941-4d4e-a90a-2905acf87521" (UID: "f901a0e2-6941-4d4e-a90a-2905acf87521"). InnerVolumeSpecName "kube-api-access-89zjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.493942 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "f901a0e2-6941-4d4e-a90a-2905acf87521" (UID: "f901a0e2-6941-4d4e-a90a-2905acf87521"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.506155 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f901a0e2-6941-4d4e-a90a-2905acf87521" (UID: "f901a0e2-6941-4d4e-a90a-2905acf87521"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.533565 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "f901a0e2-6941-4d4e-a90a-2905acf87521" (UID: "f901a0e2-6941-4d4e-a90a-2905acf87521"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.567460 4902 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.567522 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.567533 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89zjk\" (UniqueName: \"kubernetes.io/projected/f901a0e2-6941-4d4e-a90a-2905acf87521-kube-api-access-89zjk\") on node \"crc\" DevicePath \"\""
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.567547 4902 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.832463 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7fbbd7fc-3ed5-4747-8723-d1b24677c146","Type":"ContainerStarted","Data":"eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95"}
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.836278 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"11823665-4fce-4950-a6d3-bc34bafbc01d","Type":"ContainerStarted","Data":"2ef81e85f6901284a8b407191f27c64483362c30e1987b357fa5d21aa8dc8169"}
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.836355 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.838229 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.838279 4902 scope.go:117] "RemoveContainer" containerID="0820f291e3e79ca9f589a0a9fd094ceca1ca151624389e86bb426b3920d38db1"
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.839542 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerStarted","Data":"0fc37cbdb7d70159ca8295f97b2fa25e8f7409a5c1a417f96662f1f15ccc1985"}
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.856491 4902 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="f901a0e2-6941-4d4e-a90a-2905acf87521" podUID="7fbbd7fc-3ed5-4747-8723-d1b24677c146"
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.857303 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.857272371 podStartE2EDuration="4.857272371s" podCreationTimestamp="2026-01-21 16:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:18:53.85331369 +0000 UTC m=+6295.930146729" watchObservedRunningTime="2026-01-21 16:18:53.857272371 +0000 UTC m=+6295.934105400"
Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.882537 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.337981969 podStartE2EDuration="4.882517262s" podCreationTimestamp="2026-01-21 16:18:49 +0000 UTC" firstStartedPulling="2026-01-21 16:18:51.756966388 +0000 UTC m=+6293.833799417" lastFinishedPulling="2026-01-21 16:18:53.301501681 +0000 UTC m=+6295.378334710" observedRunningTime="2026-01-21 16:18:53.87249357 +0000 UTC m=+6295.949326609" watchObservedRunningTime="2026-01-21 16:18:53.882517262 +0000 UTC m=+6295.959350291"
Jan 21 16:18:54 crc kubenswrapper[4902]: I0121 16:18:54.314902 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f901a0e2-6941-4d4e-a90a-2905acf87521" path="/var/lib/kubelet/pods/f901a0e2-6941-4d4e-a90a-2905acf87521/volumes"
Jan 21 16:18:59 crc kubenswrapper[4902]: I0121 16:18:59.906208 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"657b791a-81e2-483e-8ae9-b261f3bc0c41","Type":"ContainerStarted","Data":"ce5537c27ef67a56a41ab21272778326e134e9f64738ecf2ae425325e61ed791"}
Jan 21 16:18:59 crc kubenswrapper[4902]: I0121 16:18:59.908091 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerStarted","Data":"ac1d3d4a6fd4695fa37a965837770a925659e19ff547b7bc40c9d740b5a8f0f9"}
Jan 21 16:19:00 crc kubenswrapper[4902]: I0121 16:19:00.295014 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac"
Jan 21 16:19:00 crc kubenswrapper[4902]: E0121 16:19:00.295388 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:19:00 crc kubenswrapper[4902]: I0121 16:19:00.642183 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 21 16:19:06 crc kubenswrapper[4902]: I0121 16:19:06.976700 4902 generic.go:334] "Generic (PLEG): container finished" podID="657b791a-81e2-483e-8ae9-b261f3bc0c41" containerID="ce5537c27ef67a56a41ab21272778326e134e9f64738ecf2ae425325e61ed791" exitCode=0
Jan 21 16:19:06 crc kubenswrapper[4902]: I0121 16:19:06.976840 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"657b791a-81e2-483e-8ae9-b261f3bc0c41","Type":"ContainerDied","Data":"ce5537c27ef67a56a41ab21272778326e134e9f64738ecf2ae425325e61ed791"}
Jan 21 16:19:06 crc kubenswrapper[4902]: I0121 16:19:06.980522 4902 generic.go:334] "Generic (PLEG): container finished" podID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerID="ac1d3d4a6fd4695fa37a965837770a925659e19ff547b7bc40c9d740b5a8f0f9" exitCode=0
Jan 21 16:19:06 crc kubenswrapper[4902]: I0121 16:19:06.980645 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerDied","Data":"ac1d3d4a6fd4695fa37a965837770a925659e19ff547b7bc40c9d740b5a8f0f9"}
Jan 21 16:19:10 crc kubenswrapper[4902]: I0121 16:19:10.031270 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"657b791a-81e2-483e-8ae9-b261f3bc0c41","Type":"ContainerStarted","Data":"77de32e6d676c373333b37260bfafcd1458b14d092faf9c4240b79e643a0cd70"}
Jan 21 16:19:14 crc kubenswrapper[4902]: I0121 16:19:14.070304 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"657b791a-81e2-483e-8ae9-b261f3bc0c41","Type":"ContainerStarted","Data":"9e9837593f8094b124af773e99b3c2e25b7283a441c08e6df1ca3384c6ba9061"}
Jan 21 16:19:14 crc kubenswrapper[4902]: I0121 16:19:14.071344 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:19:14 crc kubenswrapper[4902]: I0121 16:19:14.073838 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:19:14 crc kubenswrapper[4902]: I0121 16:19:14.098122 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=6.784775432 podStartE2EDuration="24.098101756s" podCreationTimestamp="2026-01-21 16:18:50 +0000 UTC" firstStartedPulling="2026-01-21 16:18:52.319697894 +0000 UTC m=+6294.396530923" lastFinishedPulling="2026-01-21 16:19:09.633024218 +0000 UTC m=+6311.709857247" observedRunningTime="2026-01-21 16:19:14.094238277 +0000 UTC m=+6316.171071306" watchObservedRunningTime="2026-01-21 16:19:14.098101756 +0000 UTC m=+6316.174934775"
Jan 21 16:19:15 crc kubenswrapper[4902]: I0121 16:19:15.294896 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac"
Jan 21 16:19:15 crc kubenswrapper[4902]: E0121 16:19:15.295428 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:19:17 crc kubenswrapper[4902]: I0121 16:19:17.102737 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerStarted","Data":"9a355b75289815a6e09b6415743b2be3282dad6b45d280d3916521198ab7d34d"}
Jan 21 16:19:18 crc kubenswrapper[4902]: I0121 16:19:18.045652 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-47gxx"]
Jan 21 16:19:18 crc kubenswrapper[4902]: I0121 16:19:18.058246 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-acb5-account-create-update-v87vq"]
Jan 21 16:19:18 crc kubenswrapper[4902]: I0121 16:19:18.073991 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-47gxx"]
Jan 21 16:19:18 crc kubenswrapper[4902]: I0121 16:19:18.085362 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-acb5-account-create-update-v87vq"]
Jan 21 16:19:18 crc kubenswrapper[4902]: I0121 16:19:18.328108 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91fe5022-2b6f-46b9-9275-c8a809b32808" path="/var/lib/kubelet/pods/91fe5022-2b6f-46b9-9275-c8a809b32808/volumes"
Jan 21 16:19:18 crc kubenswrapper[4902]: I0121 16:19:18.329333 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95" path="/var/lib/kubelet/pods/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95/volumes"
Jan 21 16:19:21 crc kubenswrapper[4902]: I0121 16:19:21.142090 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerStarted","Data":"fe72e7d7b7c14cd45027bed85d16a5a727a64a07ece72e2a953a58f5e1029bf1"}
Jan 21 16:19:24 crc kubenswrapper[4902]: I0121 16:19:24.180413 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerStarted","Data":"cd406b88fdad163c7edea30845b9be98daefef8f90607411f070f18d212dccf9"}
Jan 21 16:19:24 crc kubenswrapper[4902]: I0121 16:19:24.217214 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.344372818 podStartE2EDuration="34.217191132s" podCreationTimestamp="2026-01-21 16:18:50 +0000 UTC" firstStartedPulling="2026-01-21 16:18:53.000711767 +0000 UTC m=+6295.077544806" lastFinishedPulling="2026-01-21 16:19:23.873530091 +0000 UTC m=+6325.950363120" observedRunningTime="2026-01-21 16:19:24.207016265 +0000 UTC m=+6326.283849294" watchObservedRunningTime="2026-01-21 16:19:24.217191132 +0000 UTC m=+6326.294024161"
Jan 21 16:19:25 crc kubenswrapper[4902]: I0121 16:19:25.034012 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-8xw4q"]
Jan 21 16:19:25 crc kubenswrapper[4902]: I0121 16:19:25.043517 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-8xw4q"]
Jan 21 16:19:26 crc kubenswrapper[4902]: I0121 16:19:26.295603 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac"
Jan 21 16:19:26 crc kubenswrapper[4902]: I0121 16:19:26.312541 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" path="/var/lib/kubelet/pods/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3/volumes"
Jan 21 16:19:27 crc kubenswrapper[4902]: I0121 16:19:27.411697 4902 scope.go:117] "RemoveContainer" containerID="203c5f96aeff362658b5520a6e9eab7da26f8f63fd730b8b01fac5d263703aa2"
Jan 21 16:19:27 crc kubenswrapper[4902]: I0121 16:19:27.448871 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:27 crc kubenswrapper[4902]: I0121 16:19:27.454428 4902 scope.go:117] "RemoveContainer" containerID="fa806723dfd7c0c4b6154749911e6912458d2480fc0fa40932f24e709061ffad"
Jan 21 16:19:27 crc kubenswrapper[4902]: I0121 16:19:27.516025 4902 scope.go:117] "RemoveContainer" containerID="896306bd2b1df34ec4addf4110626bc7531717802d050ed131267e70790b5a08"
Jan 21 16:19:27 crc kubenswrapper[4902]: I0121 16:19:27.929931 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"dd9f943d521b68000af79f0fd73624ba084fada704e30191659b3cc0a8066bce"}
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.104887 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.108543 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.114631 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.120065 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.128941 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.211436 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-config-data\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.211496 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.211562 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-log-httpd\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.211618 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-scripts\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.211720 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.211754 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-run-httpd\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.211773 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgtzj\" (UniqueName: \"kubernetes.io/projected/d69a60b6-5623-4c6c-aaac-8d944a90748a-kube-api-access-wgtzj\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.313451 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-config-data\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.313514 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.313571 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-log-httpd\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.313617 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-scripts\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.313703 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.313737 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-run-httpd\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.313756 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgtzj\" (UniqueName: \"kubernetes.io/projected/d69a60b6-5623-4c6c-aaac-8d944a90748a-kube-api-access-wgtzj\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.314121 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-log-httpd\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.314157 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-run-httpd\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.320565 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.321107 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.330402 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-config-data\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.330897 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-scripts\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.338343 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgtzj\" (UniqueName: \"kubernetes.io/projected/d69a60b6-5623-4c6c-aaac-8d944a90748a-kube-api-access-wgtzj\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.445068 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.917233 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:19:32 crc kubenswrapper[4902]: W0121 16:19:32.920975 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd69a60b6_5623_4c6c_aaac_8d944a90748a.slice/crio-b1f7bb2c211e7960134591749043375f5b8146796c4a6545e0fa123d72475b61 WatchSource:0}: Error finding container b1f7bb2c211e7960134591749043375f5b8146796c4a6545e0fa123d72475b61: Status 404 returned error can't find the container with id b1f7bb2c211e7960134591749043375f5b8146796c4a6545e0fa123d72475b61
Jan 21 16:19:33 crc kubenswrapper[4902]: I0121 16:19:33.019519 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerStarted","Data":"b1f7bb2c211e7960134591749043375f5b8146796c4a6545e0fa123d72475b61"}
Jan 21 16:19:34 crc kubenswrapper[4902]: I0121 16:19:34.041535 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerStarted","Data":"17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c"}
Jan 21 16:19:35 crc kubenswrapper[4902]: I0121 16:19:35.069084 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerStarted","Data":"52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636"}
Jan 21 16:19:36 crc kubenswrapper[4902]: I0121 16:19:36.080849 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerStarted","Data":"ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce"}
Jan 21 16:19:37 crc kubenswrapper[4902]: I0121 16:19:37.099291 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerStarted","Data":"feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b"}
Jan 21 16:19:37 crc kubenswrapper[4902]: I0121 16:19:37.099805 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 16:19:37 crc kubenswrapper[4902]: I0121 16:19:37.124712 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.373517893 podStartE2EDuration="5.124685363s" podCreationTimestamp="2026-01-21 16:19:32 +0000 UTC" firstStartedPulling="2026-01-21 16:19:32.923563491 +0000 UTC m=+6335.000396520" lastFinishedPulling="2026-01-21 16:19:36.674730951 +0000 UTC m=+6338.751563990" observedRunningTime="2026-01-21 16:19:37.122934183 +0000 UTC m=+6339.199767212" watchObservedRunningTime="2026-01-21 16:19:37.124685363 +0000 UTC m=+6339.201518402"
Jan 21 16:19:37 crc kubenswrapper[4902]: I0121 16:19:37.450266 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:37 crc kubenswrapper[4902]: I0121 16:19:37.453295 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:38 crc kubenswrapper[4902]: I0121 16:19:38.109938 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.643662 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.644670 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="7fbbd7fc-3ed5-4747-8723-d1b24677c146" containerName="openstackclient" containerID="cri-o://eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95" gracePeriod=2
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.656553 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.687220 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 21 16:19:39 crc kubenswrapper[4902]: E0121 16:19:39.687664 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fbbd7fc-3ed5-4747-8723-d1b24677c146" containerName="openstackclient"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.687687 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fbbd7fc-3ed5-4747-8723-d1b24677c146" containerName="openstackclient"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.687917 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fbbd7fc-3ed5-4747-8723-d1b24677c146" containerName="openstackclient"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.688670 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.701324 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.716759 4902 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7fbbd7fc-3ed5-4747-8723-d1b24677c146" podUID="052c7402-6934-4f86-bb78-e83d7da3b587"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.808926 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/052c7402-6934-4f86-bb78-e83d7da3b587-combined-ca-bundle\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.808980 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d97v2\" (UniqueName: \"kubernetes.io/projected/052c7402-6934-4f86-bb78-e83d7da3b587-kube-api-access-d97v2\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.809074 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/052c7402-6934-4f86-bb78-e83d7da3b587-openstack-config-secret\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.809544 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/052c7402-6934-4f86-bb78-e83d7da3b587-openstack-config\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.912027 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/052c7402-6934-4f86-bb78-e83d7da3b587-combined-ca-bundle\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.912082 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d97v2\" (UniqueName: \"kubernetes.io/projected/052c7402-6934-4f86-bb78-e83d7da3b587-kube-api-access-d97v2\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.912124 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/052c7402-6934-4f86-bb78-e83d7da3b587-openstack-config-secret\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.912350 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/052c7402-6934-4f86-bb78-e83d7da3b587-openstack-config\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.913175 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/052c7402-6934-4f86-bb78-e83d7da3b587-openstack-config\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.918757 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/052c7402-6934-4f86-bb78-e83d7da3b587-openstack-config-secret\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.922057 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/052c7402-6934-4f86-bb78-e83d7da3b587-combined-ca-bundle\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient"
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.928543 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d97v2\" (UniqueName: \"kubernetes.io/projected/052c7402-6934-4f86-bb78-e83d7da3b587-kube-api-access-d97v2\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient"
Jan 21 16:19:40 crc kubenswrapper[4902]: I0121 16:19:40.009056 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 21 16:19:40 crc kubenswrapper[4902]: I0121 16:19:40.603877 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 21 16:19:40 crc kubenswrapper[4902]: W0121 16:19:40.623404 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod052c7402_6934_4f86_bb78_e83d7da3b587.slice/crio-da15f1e02843264e59b668c244d416698a18a74ddf52c69eca97ac495d0a20d2 WatchSource:0}: Error finding container da15f1e02843264e59b668c244d416698a18a74ddf52c69eca97ac495d0a20d2: Status 404 returned error can't find the container with id da15f1e02843264e59b668c244d416698a18a74ddf52c69eca97ac495d0a20d2
Jan 21 16:19:40 crc kubenswrapper[4902]: I0121 16:19:40.960832 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-v4xqk"]
Jan 21 16:19:40 crc kubenswrapper[4902]: I0121 16:19:40.962330 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-v4xqk"
Jan 21 16:19:40 crc kubenswrapper[4902]: I0121 16:19:40.979906 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-v4xqk"]
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.071551 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.071839 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="prometheus" containerID="cri-o://9a355b75289815a6e09b6415743b2be3282dad6b45d280d3916521198ab7d34d" gracePeriod=600
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.071895 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="config-reloader" containerID="cri-o://fe72e7d7b7c14cd45027bed85d16a5a727a64a07ece72e2a953a58f5e1029bf1" gracePeriod=600
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.071927 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="thanos-sidecar" containerID="cri-o://cd406b88fdad163c7edea30845b9be98daefef8f90607411f070f18d212dccf9" gracePeriod=600
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.139631 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9de683-01b0-4513-8e18-d56361ae4bc6-operator-scripts\") pod \"aodh-db-create-v4xqk\" (UID: \"4f9de683-01b0-4513-8e18-d56361ae4bc6\") " pod="openstack/aodh-db-create-v4xqk"
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.140203 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhrqk\" (UniqueName: \"kubernetes.io/projected/4f9de683-01b0-4513-8e18-d56361ae4bc6-kube-api-access-mhrqk\") pod \"aodh-db-create-v4xqk\" (UID: \"4f9de683-01b0-4513-8e18-d56361ae4bc6\") " pod="openstack/aodh-db-create-v4xqk"
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.162151 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"052c7402-6934-4f86-bb78-e83d7da3b587","Type":"ContainerStarted","Data":"21ff18962aadea4c8d1af735e1fe49f6745b23f53882b2305982a44ac9e132be"}
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.162193 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"052c7402-6934-4f86-bb78-e83d7da3b587","Type":"ContainerStarted","Data":"da15f1e02843264e59b668c244d416698a18a74ddf52c69eca97ac495d0a20d2"}
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.179746 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.179727463 podStartE2EDuration="2.179727463s" podCreationTimestamp="2026-01-21 16:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:19:41.178663203 +0000 UTC m=+6343.255496232" watchObservedRunningTime="2026-01-21 16:19:41.179727463 +0000 UTC m=+6343.256560492"
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.204607 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-fe19-account-create-update-m4ndc"]
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.206220 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-fe19-account-create-update-m4ndc"
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.209024 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret"
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.215926 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-fe19-account-create-update-m4ndc"]
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.242596 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9de683-01b0-4513-8e18-d56361ae4bc6-operator-scripts\") pod \"aodh-db-create-v4xqk\" (UID: \"4f9de683-01b0-4513-8e18-d56361ae4bc6\") " pod="openstack/aodh-db-create-v4xqk"
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.242703 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhrqk\" (UniqueName: \"kubernetes.io/projected/4f9de683-01b0-4513-8e18-d56361ae4bc6-kube-api-access-mhrqk\") pod \"aodh-db-create-v4xqk\" (UID: \"4f9de683-01b0-4513-8e18-d56361ae4bc6\") " pod="openstack/aodh-db-create-v4xqk"
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.244178 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9de683-01b0-4513-8e18-d56361ae4bc6-operator-scripts\") pod \"aodh-db-create-v4xqk\" (UID: \"4f9de683-01b0-4513-8e18-d56361ae4bc6\") " pod="openstack/aodh-db-create-v4xqk"
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.266696 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhrqk\" (UniqueName: \"kubernetes.io/projected/4f9de683-01b0-4513-8e18-d56361ae4bc6-kube-api-access-mhrqk\") pod \"aodh-db-create-v4xqk\" (UID: \"4f9de683-01b0-4513-8e18-d56361ae4bc6\") " pod="openstack/aodh-db-create-v4xqk"
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.325660 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-v4xqk"
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.349015 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnnlh\" (UniqueName: \"kubernetes.io/projected/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-kube-api-access-xnnlh\") pod \"aodh-fe19-account-create-update-m4ndc\" (UID: \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\") " pod="openstack/aodh-fe19-account-create-update-m4ndc"
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.362001 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-operator-scripts\") pod \"aodh-fe19-account-create-update-m4ndc\" (UID: \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\") " pod="openstack/aodh-fe19-account-create-update-m4ndc"
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.464536 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-operator-scripts\") pod \"aodh-fe19-account-create-update-m4ndc\" (UID: \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\") " pod="openstack/aodh-fe19-account-create-update-m4ndc"
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.464705 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnnlh\" (UniqueName: \"kubernetes.io/projected/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-kube-api-access-xnnlh\") pod \"aodh-fe19-account-create-update-m4ndc\" (UID: \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\") " pod="openstack/aodh-fe19-account-create-update-m4ndc"
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.466255 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-operator-scripts\") pod \"aodh-fe19-account-create-update-m4ndc\" (UID: \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\") " pod="openstack/aodh-fe19-account-create-update-m4ndc"
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.495982 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnnlh\" (UniqueName: \"kubernetes.io/projected/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-kube-api-access-xnnlh\") pod \"aodh-fe19-account-create-update-m4ndc\" (UID: \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\") " pod="openstack/aodh-fe19-account-create-update-m4ndc"
Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.559141 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-fe19-account-create-update-m4ndc"
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.032928 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-v4xqk"]
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.096728 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.265149 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-fe19-account-create-update-m4ndc"]
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.296306 4902 generic.go:334] "Generic (PLEG): container finished" podID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerID="cd406b88fdad163c7edea30845b9be98daefef8f90607411f070f18d212dccf9" exitCode=0
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.296336 4902 generic.go:334] "Generic (PLEG): container finished" podID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerID="fe72e7d7b7c14cd45027bed85d16a5a727a64a07ece72e2a953a58f5e1029bf1" exitCode=0
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.296345 4902 generic.go:334] "Generic (PLEG): container finished" podID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerID="9a355b75289815a6e09b6415743b2be3282dad6b45d280d3916521198ab7d34d" exitCode=0
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.304015 4902 generic.go:334] "Generic (PLEG): container finished" podID="7fbbd7fc-3ed5-4747-8723-d1b24677c146" containerID="eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95" exitCode=137
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.305999 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.365280 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config\") pod \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") "
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.365458 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config-secret\") pod \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") "
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.365668 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-combined-ca-bundle\") pod \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") "
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.365981 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lx4m\" (UniqueName: \"kubernetes.io/projected/7fbbd7fc-3ed5-4747-8723-d1b24677c146-kube-api-access-6lx4m\") pod \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") "
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.395356 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fbbd7fc-3ed5-4747-8723-d1b24677c146-kube-api-access-6lx4m" (OuterVolumeSpecName: "kube-api-access-6lx4m") pod "7fbbd7fc-3ed5-4747-8723-d1b24677c146" (UID: "7fbbd7fc-3ed5-4747-8723-d1b24677c146"). InnerVolumeSpecName "kube-api-access-6lx4m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.402661 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerDied","Data":"cd406b88fdad163c7edea30845b9be98daefef8f90607411f070f18d212dccf9"}
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.402713 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerDied","Data":"fe72e7d7b7c14cd45027bed85d16a5a727a64a07ece72e2a953a58f5e1029bf1"}
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.402724 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerDied","Data":"9a355b75289815a6e09b6415743b2be3282dad6b45d280d3916521198ab7d34d"}
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.402735 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-v4xqk" event={"ID":"4f9de683-01b0-4513-8e18-d56361ae4bc6","Type":"ContainerStarted","Data":"03782de3cd4352efbf4af45d731178ba78494637b72a83a8537282df4f5d7339"}
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.403713 4902 scope.go:117] "RemoveContainer" containerID="eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95"
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.447565 4902 scope.go:117] "RemoveContainer" containerID="eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95"
Jan 21 16:19:42 crc kubenswrapper[4902]: E0121 16:19:42.448116 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95\": container with ID starting with eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95 not found: ID does not exist" containerID="eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95"
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.448155 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95"} err="failed to get container status \"eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95\": rpc error: code = NotFound desc = could not find container \"eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95\": container with ID starting with eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95 not found: ID does not exist"
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.472269 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lx4m\" (UniqueName: \"kubernetes.io/projected/7fbbd7fc-3ed5-4747-8723-d1b24677c146-kube-api-access-6lx4m\") on node \"crc\" DevicePath \"\""
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.491148 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fbbd7fc-3ed5-4747-8723-d1b24677c146" (UID: "7fbbd7fc-3ed5-4747-8723-d1b24677c146"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.496763 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.503656 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "7fbbd7fc-3ed5-4747-8723-d1b24677c146" (UID: "7fbbd7fc-3ed5-4747-8723-d1b24677c146"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.533947 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "7fbbd7fc-3ed5-4747-8723-d1b24677c146" (UID: "7fbbd7fc-3ed5-4747-8723-d1b24677c146"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.576256 4902 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.576299 4902 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.576312 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.677436 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-tls-assets\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") "
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.677811 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5bd53316-beed-49bb-8eec-a78efdb19f0a-config-out\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") "
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.677883 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-1\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") "
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.677971 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-web-config\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") "
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.678020 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for
volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-2\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.678084 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-thanos-prometheus-http-client-file\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.678118 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldpwc\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-kube-api-access-ldpwc\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.678149 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-0\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.678352 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.678434 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-config\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.679550 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.679883 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.680161 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). 
InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.719672 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bd53316-beed-49bb-8eec-a78efdb19f0a-config-out" (OuterVolumeSpecName: "config-out") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.719918 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-kube-api-access-ldpwc" (OuterVolumeSpecName: "kube-api-access-ldpwc") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "kube-api-access-ldpwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.721080 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.721514 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-config" (OuterVolumeSpecName: "config") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.722655 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.724303 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-web-config" (OuterVolumeSpecName: "web-config") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.740571 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.781917 4902 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.781955 4902 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5bd53316-beed-49bb-8eec-a78efdb19f0a-config-out\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.781965 4902 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.781974 4902 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-web-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.781983 4902 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.781994 4902 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.782006 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldpwc\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-kube-api-access-ldpwc\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.782015 4902 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.782085 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") on node \"crc\" " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.782098 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.820993 4902 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.821165 4902 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc") on node "crc"
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.883806 4902 reconciler_common.go:293] "Volume detached for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") on node \"crc\" DevicePath \"\""
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.315686 4902 generic.go:334] "Generic (PLEG): container finished" podID="4f9de683-01b0-4513-8e18-d56361ae4bc6" containerID="04e8685d31a4c1b85ba91615c510f74e4584d6a0993549e22bc5847f14ee429d" exitCode=0
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.315776 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-v4xqk" event={"ID":"4f9de683-01b0-4513-8e18-d56361ae4bc6","Type":"ContainerDied","Data":"04e8685d31a4c1b85ba91615c510f74e4584d6a0993549e22bc5847f14ee429d"}
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.319491 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerDied","Data":"0fc37cbdb7d70159ca8295f97b2fa25e8f7409a5c1a417f96662f1f15ccc1985"}
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.319548 4902 scope.go:117] "RemoveContainer" containerID="cd406b88fdad163c7edea30845b9be98daefef8f90607411f070f18d212dccf9"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.319823 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.339851 4902 generic.go:334] "Generic (PLEG): container finished" podID="947c6da7-eea1-412b-8f8d-f1cdfadcf4ea" containerID="a17204ae8500af5c3ac489e63a42369874fd6943aaf98b293789e79f2dc7c291" exitCode=0
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.340138 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-fe19-account-create-update-m4ndc" event={"ID":"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea","Type":"ContainerDied","Data":"a17204ae8500af5c3ac489e63a42369874fd6943aaf98b293789e79f2dc7c291"}
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.340194 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-fe19-account-create-update-m4ndc" event={"ID":"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea","Type":"ContainerStarted","Data":"da0b1780c377ee2c6d363298f097a1e811d58ad4385aef980f7111ef3a9d2062"}
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.365907 4902 scope.go:117] "RemoveContainer" containerID="fe72e7d7b7c14cd45027bed85d16a5a727a64a07ece72e2a953a58f5e1029bf1"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.397478 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.410248 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.436103 4902 scope.go:117] "RemoveContainer" containerID="9a355b75289815a6e09b6415743b2be3282dad6b45d280d3916521198ab7d34d"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.445425 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 21 16:19:43 crc kubenswrapper[4902]: E0121 16:19:43.445974 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="prometheus"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.445997 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="prometheus"
Jan 21 16:19:43 crc kubenswrapper[4902]: E0121 16:19:43.446059 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="init-config-reloader"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.446070 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="init-config-reloader"
Jan 21 16:19:43 crc kubenswrapper[4902]: E0121 16:19:43.446102 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="thanos-sidecar"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.446111 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="thanos-sidecar"
Jan 21 16:19:43 crc kubenswrapper[4902]: E0121 16:19:43.446133 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="config-reloader"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.446138 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="config-reloader"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.446329 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="prometheus"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.446352 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="config-reloader"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.446364 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="thanos-sidecar"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.448978 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.456334 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.456377 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.456405 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.456786 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.456861 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.456998 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.457684 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.461636 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-nkftv"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.465243 4902 scope.go:117] "RemoveContainer" containerID="ac1d3d4a6fd4695fa37a965837770a925659e19ff547b7bc40c9d740b5a8f0f9"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.465732 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.467795 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601025 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601169 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601251 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601404 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601429 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/294a561c-9181-4330-86e5-ab51e9f3c07c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601554 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/294a561c-9181-4330-86e5-ab51e9f3c07c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601607 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601652 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601728 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601782 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601822 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xqdm\" (UniqueName: \"kubernetes.io/projected/294a561c-9181-4330-86e5-ab51e9f3c07c-kube-api-access-6xqdm\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601915 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601952 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-config\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.703888 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.703952 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/294a561c-9181-4330-86e5-ab51e9f3c07c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704006 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/294a561c-9181-4330-86e5-ab51e9f3c07c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704028 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704075 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704099 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704123 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704147 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xqdm\" (UniqueName: \"kubernetes.io/projected/294a561c-9181-4330-86e5-ab51e9f3c07c-kube-api-access-6xqdm\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704206 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704233 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-config\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704271 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704304 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704336 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.705294 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.705424 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.706578 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.710062 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.710348 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.710394 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2072cb44e63c79cbe1d1309d1abe2e89a3820c0b6ba86768b97aed1379d46137/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.711622 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.711702 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.712944 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.715012 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.717123 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-config\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.724875 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xqdm\" (UniqueName: \"kubernetes.io/projected/294a561c-9181-4330-86e5-ab51e9f3c07c-kube-api-access-6xqdm\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.725157 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/294a561c-9181-4330-86e5-ab51e9f3c07c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.730738 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/294a561c-9181-4330-86e5-ab51e9f3c07c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.774444 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.838932 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 21 16:19:44 crc kubenswrapper[4902]: I0121 16:19:44.306369 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" path="/var/lib/kubelet/pods/5bd53316-beed-49bb-8eec-a78efdb19f0a/volumes"
Jan 21 16:19:44 crc kubenswrapper[4902]: I0121 16:19:44.307596 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fbbd7fc-3ed5-4747-8723-d1b24677c146" path="/var/lib/kubelet/pods/7fbbd7fc-3ed5-4747-8723-d1b24677c146/volumes"
Jan 21 16:19:44 crc kubenswrapper[4902]: I0121 16:19:44.357803 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 21 16:19:44 crc kubenswrapper[4902]: I0121 16:19:44.915580 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-fe19-account-create-update-m4ndc"
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.050803 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-v4xqk"
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.057577 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-operator-scripts\") pod \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\" (UID: \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\") "
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.057756 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnnlh\" (UniqueName: \"kubernetes.io/projected/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-kube-api-access-xnnlh\") pod \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\" (UID: \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\") "
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.058437 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "947c6da7-eea1-412b-8f8d-f1cdfadcf4ea" (UID: "947c6da7-eea1-412b-8f8d-f1cdfadcf4ea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.064735 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-kube-api-access-xnnlh" (OuterVolumeSpecName: "kube-api-access-xnnlh") pod "947c6da7-eea1-412b-8f8d-f1cdfadcf4ea" (UID: "947c6da7-eea1-412b-8f8d-f1cdfadcf4ea"). InnerVolumeSpecName "kube-api-access-xnnlh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.159035 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhrqk\" (UniqueName: \"kubernetes.io/projected/4f9de683-01b0-4513-8e18-d56361ae4bc6-kube-api-access-mhrqk\") pod \"4f9de683-01b0-4513-8e18-d56361ae4bc6\" (UID: \"4f9de683-01b0-4513-8e18-d56361ae4bc6\") "
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.159276 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9de683-01b0-4513-8e18-d56361ae4bc6-operator-scripts\") pod \"4f9de683-01b0-4513-8e18-d56361ae4bc6\" (UID: \"4f9de683-01b0-4513-8e18-d56361ae4bc6\") "
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.159893 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f9de683-01b0-4513-8e18-d56361ae4bc6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4f9de683-01b0-4513-8e18-d56361ae4bc6" (UID: "4f9de683-01b0-4513-8e18-d56361ae4bc6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.160099 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.160127 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnnlh\" (UniqueName: \"kubernetes.io/projected/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-kube-api-access-xnnlh\") on node \"crc\" DevicePath \"\""
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.161678 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f9de683-01b0-4513-8e18-d56361ae4bc6-kube-api-access-mhrqk" (OuterVolumeSpecName: "kube-api-access-mhrqk") pod "4f9de683-01b0-4513-8e18-d56361ae4bc6" (UID: "4f9de683-01b0-4513-8e18-d56361ae4bc6"). InnerVolumeSpecName "kube-api-access-mhrqk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.263394 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9de683-01b0-4513-8e18-d56361ae4bc6-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.263430 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhrqk\" (UniqueName: \"kubernetes.io/projected/4f9de683-01b0-4513-8e18-d56361ae4bc6-kube-api-access-mhrqk\") on node \"crc\" DevicePath \"\""
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.367431 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-fe19-account-create-update-m4ndc"
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.367427 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-fe19-account-create-update-m4ndc" event={"ID":"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea","Type":"ContainerDied","Data":"da0b1780c377ee2c6d363298f097a1e811d58ad4385aef980f7111ef3a9d2062"}
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.367548 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da0b1780c377ee2c6d363298f097a1e811d58ad4385aef980f7111ef3a9d2062"
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.368984 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"294a561c-9181-4330-86e5-ab51e9f3c07c","Type":"ContainerStarted","Data":"55af016233e8bdb2e69339f03e2e871b89a824bcde36fb6977c57f1e39316cdb"}
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.371603 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-v4xqk" event={"ID":"4f9de683-01b0-4513-8e18-d56361ae4bc6","Type":"ContainerDied","Data":"03782de3cd4352efbf4af45d731178ba78494637b72a83a8537282df4f5d7339"}
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.371646 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03782de3cd4352efbf4af45d731178ba78494637b72a83a8537282df4f5d7339"
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.371714 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-v4xqk"
Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.449532 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.138:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:19:45 crc kubenswrapper[4902]: E0121 16:19:45.487429 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod947c6da7_eea1_412b_8f8d_f1cdfadcf4ea.slice/crio-da0b1780c377ee2c6d363298f097a1e811d58ad4385aef980f7111ef3a9d2062\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9de683_01b0_4513_8e18_d56361ae4bc6.slice/crio-03782de3cd4352efbf4af45d731178ba78494637b72a83a8537282df4f5d7339\": RecentStats: unable to find data in memory cache]"
Jan 21 16:19:49 crc kubenswrapper[4902]: I0121 16:19:49.422849 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"294a561c-9181-4330-86e5-ab51e9f3c07c","Type":"ContainerStarted","Data":"a90dbf20357ff7f033de8ec9d2730be60fc8e73c3b80e638a51f94879d51d50d"}
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.315215 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-bvsxp"]
Jan 21 16:19:51 crc kubenswrapper[4902]: E0121 16:19:51.315912 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947c6da7-eea1-412b-8f8d-f1cdfadcf4ea" containerName="mariadb-account-create-update"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.316019 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="947c6da7-eea1-412b-8f8d-f1cdfadcf4ea" containerName="mariadb-account-create-update"
Jan 21 16:19:51 crc kubenswrapper[4902]: E0121 16:19:51.318941 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f9de683-01b0-4513-8e18-d56361ae4bc6" containerName="mariadb-database-create"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.318989 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f9de683-01b0-4513-8e18-d56361ae4bc6" containerName="mariadb-database-create"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.319463 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f9de683-01b0-4513-8e18-d56361ae4bc6" containerName="mariadb-database-create"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.319481 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="947c6da7-eea1-412b-8f8d-f1cdfadcf4ea" containerName="mariadb-account-create-update"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.320500 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-bvsxp"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.327197 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.327344 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.327494 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-zlqm8"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.327495 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.354593 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-bvsxp"]
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.465328 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zlsg\" (UniqueName: \"kubernetes.io/projected/7ad5c1ce-9471-430a-b273-873699a86d57-kube-api-access-9zlsg\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.465885 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-scripts\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.466777 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-config-data\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.466824 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-combined-ca-bundle\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.569722 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zlsg\" (UniqueName: \"kubernetes.io/projected/7ad5c1ce-9471-430a-b273-873699a86d57-kube-api-access-9zlsg\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.569809 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-scripts\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.569846 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-config-data\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.569876 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-combined-ca-bundle\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.578811 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-scripts\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.582406 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-config-data\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.584820 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-combined-ca-bundle\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.589967 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zlsg\" (UniqueName: \"kubernetes.io/projected/7ad5c1ce-9471-430a-b273-873699a86d57-kube-api-access-9zlsg\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp"
Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.651916 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-bvsxp"
Jan 21 16:19:52 crc kubenswrapper[4902]: I0121 16:19:52.130632 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-bvsxp"]
Jan 21 16:19:52 crc kubenswrapper[4902]: W0121 16:19:52.134445 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ad5c1ce_9471_430a_b273_873699a86d57.slice/crio-717d1e1819361222257d7c8c0b607114d7f551f24f2d5ad4f20d3d971a608d81 WatchSource:0}: Error finding container 717d1e1819361222257d7c8c0b607114d7f551f24f2d5ad4f20d3d971a608d81: Status 404 returned error can't find the container with id 717d1e1819361222257d7c8c0b607114d7f551f24f2d5ad4f20d3d971a608d81
Jan 21 16:19:52 crc kubenswrapper[4902]: I0121 16:19:52.456733 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-bvsxp" event={"ID":"7ad5c1ce-9471-430a-b273-873699a86d57","Type":"ContainerStarted","Data":"717d1e1819361222257d7c8c0b607114d7f551f24f2d5ad4f20d3d971a608d81"}
Jan 21 16:19:55 crc kubenswrapper[4902]: I0121 16:19:55.491334 4902 generic.go:334] "Generic (PLEG): container finished" podID="294a561c-9181-4330-86e5-ab51e9f3c07c" containerID="a90dbf20357ff7f033de8ec9d2730be60fc8e73c3b80e638a51f94879d51d50d" exitCode=0
Jan 21 16:19:55 crc kubenswrapper[4902]: I0121 16:19:55.491819 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"294a561c-9181-4330-86e5-ab51e9f3c07c","Type":"ContainerDied","Data":"a90dbf20357ff7f033de8ec9d2730be60fc8e73c3b80e638a51f94879d51d50d"}
Jan 21 16:19:56 crc kubenswrapper[4902]: I0121 16:19:56.516502 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-bvsxp" event={"ID":"7ad5c1ce-9471-430a-b273-873699a86d57","Type":"ContainerStarted","Data":"8ce585bfe7e263f38d6e4b6cf4cca542c267ca3f4df18725b7e9510d21180fb3"}
Jan 21 16:19:56 crc kubenswrapper[4902]: I0121 16:19:56.520267 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"294a561c-9181-4330-86e5-ab51e9f3c07c","Type":"ContainerStarted","Data":"6deb6b902dd5dd2154aed2a6c1c7127254a8e01efdfa7a3e5492b8e17c3da362"}
Jan 21 16:19:56 crc kubenswrapper[4902]: I0121 16:19:56.546850 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-bvsxp" podStartSLOduration=1.793954152 podStartE2EDuration="5.546827449s" podCreationTimestamp="2026-01-21 16:19:51 +0000 UTC" firstStartedPulling="2026-01-21 16:19:52.13646686 +0000 UTC m=+6354.213299889" lastFinishedPulling="2026-01-21 16:19:55.889340167 +0000 UTC m=+6357.966173186" observedRunningTime="2026-01-21 16:19:56.535301795 +0000 UTC m=+6358.612134854" watchObservedRunningTime="2026-01-21 16:19:56.546827449 +0000 UTC m=+6358.623660498"
Jan 21 16:19:57 crc kubenswrapper[4902]: I0121 16:19:57.039067 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-70cb-account-create-update-hprl8"]
Jan 21 16:19:57 crc kubenswrapper[4902]: I0121 16:19:57.049171 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-xrlb5"]
Jan 21 16:19:57 crc kubenswrapper[4902]: I0121 16:19:57.058034 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-70cb-account-create-update-hprl8"]
Jan 21 16:19:57 crc kubenswrapper[4902]: I0121 16:19:57.066787 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-xrlb5"]
Jan 21 16:19:58
crc kubenswrapper[4902]: I0121 16:19:58.316786 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="311b51a9-7349-42c3-8777-e1da9c997866" path="/var/lib/kubelet/pods/311b51a9-7349-42c3-8777-e1da9c997866/volumes" Jan 21 16:19:58 crc kubenswrapper[4902]: I0121 16:19:58.319154 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32dabaa5-86fa-4ff4-9a8e-7cd5360c978c" path="/var/lib/kubelet/pods/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c/volumes" Jan 21 16:19:59 crc kubenswrapper[4902]: I0121 16:19:59.551852 4902 generic.go:334] "Generic (PLEG): container finished" podID="7ad5c1ce-9471-430a-b273-873699a86d57" containerID="8ce585bfe7e263f38d6e4b6cf4cca542c267ca3f4df18725b7e9510d21180fb3" exitCode=0 Jan 21 16:19:59 crc kubenswrapper[4902]: I0121 16:19:59.551899 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-bvsxp" event={"ID":"7ad5c1ce-9471-430a-b273-873699a86d57","Type":"ContainerDied","Data":"8ce585bfe7e263f38d6e4b6cf4cca542c267ca3f4df18725b7e9510d21180fb3"} Jan 21 16:20:00 crc kubenswrapper[4902]: I0121 16:20:00.569275 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"294a561c-9181-4330-86e5-ab51e9f3c07c","Type":"ContainerStarted","Data":"d56f4095260e9d7560120b406ac9902151f1010693b894dc04fe5a44023bba5c"} Jan 21 16:20:00 crc kubenswrapper[4902]: I0121 16:20:00.569625 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"294a561c-9181-4330-86e5-ab51e9f3c07c","Type":"ContainerStarted","Data":"5cd62bbc0874133bb18e6d511f86ad6fa95a5bfc1930982ecd818f9e3d17728e"} Jan 21 16:20:00 crc kubenswrapper[4902]: I0121 16:20:00.608117 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.608101104 podStartE2EDuration="17.608101104s" podCreationTimestamp="2026-01-21 16:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:20:00.596836777 +0000 UTC m=+6362.673669886" watchObservedRunningTime="2026-01-21 16:20:00.608101104 +0000 UTC m=+6362.684934133" Jan 21 16:20:00 crc kubenswrapper[4902]: I0121 16:20:00.997170 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.094368 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-scripts\") pod \"7ad5c1ce-9471-430a-b273-873699a86d57\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.094519 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-config-data\") pod \"7ad5c1ce-9471-430a-b273-873699a86d57\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.094543 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-combined-ca-bundle\") pod \"7ad5c1ce-9471-430a-b273-873699a86d57\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.094711 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zlsg\" (UniqueName: \"kubernetes.io/projected/7ad5c1ce-9471-430a-b273-873699a86d57-kube-api-access-9zlsg\") pod \"7ad5c1ce-9471-430a-b273-873699a86d57\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.100405 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-scripts" (OuterVolumeSpecName: "scripts") pod "7ad5c1ce-9471-430a-b273-873699a86d57" (UID: "7ad5c1ce-9471-430a-b273-873699a86d57"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.100768 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ad5c1ce-9471-430a-b273-873699a86d57-kube-api-access-9zlsg" (OuterVolumeSpecName: "kube-api-access-9zlsg") pod "7ad5c1ce-9471-430a-b273-873699a86d57" (UID: "7ad5c1ce-9471-430a-b273-873699a86d57"). InnerVolumeSpecName "kube-api-access-9zlsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.123095 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-config-data" (OuterVolumeSpecName: "config-data") pod "7ad5c1ce-9471-430a-b273-873699a86d57" (UID: "7ad5c1ce-9471-430a-b273-873699a86d57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.125829 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ad5c1ce-9471-430a-b273-873699a86d57" (UID: "7ad5c1ce-9471-430a-b273-873699a86d57"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.197124 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.197154 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.197164 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.197177 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zlsg\" (UniqueName: \"kubernetes.io/projected/7ad5c1ce-9471-430a-b273-873699a86d57-kube-api-access-9zlsg\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.582952 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-bvsxp" event={"ID":"7ad5c1ce-9471-430a-b273-873699a86d57","Type":"ContainerDied","Data":"717d1e1819361222257d7c8c0b607114d7f551f24f2d5ad4f20d3d971a608d81"} Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.583440 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="717d1e1819361222257d7c8c0b607114d7f551f24f2d5ad4f20d3d971a608d81" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.582999 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:20:02 crc kubenswrapper[4902]: I0121 16:20:02.457305 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 16:20:03 crc kubenswrapper[4902]: I0121 16:20:03.036391 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-nnsm5"] Jan 21 16:20:03 crc kubenswrapper[4902]: I0121 16:20:03.044833 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-nnsm5"] Jan 21 16:20:03 crc kubenswrapper[4902]: I0121 16:20:03.839007 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 21 16:20:04 crc kubenswrapper[4902]: I0121 16:20:04.313872 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f532d2b6-7ad3-4b83-9100-d4b94d5a512d" path="/var/lib/kubelet/pods/f532d2b6-7ad3-4b83-9100-d4b94d5a512d/volumes" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.111172 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 21 16:20:06 crc kubenswrapper[4902]: E0121 16:20:06.112198 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad5c1ce-9471-430a-b273-873699a86d57" containerName="aodh-db-sync" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.112218 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad5c1ce-9471-430a-b273-873699a86d57" containerName="aodh-db-sync" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.112495 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ad5c1ce-9471-430a-b273-873699a86d57" containerName="aodh-db-sync" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.114830 4902 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.122080 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-zlqm8" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.122458 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.122648 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.124928 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.201485 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-combined-ca-bundle\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.201539 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-scripts\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.201665 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-config-data\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.201834 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b7ww\" (UniqueName: \"kubernetes.io/projected/76119988-951c-4bee-9832-7ac41e0335de-kube-api-access-4b7ww\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.384177 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-combined-ca-bundle\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.384222 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-scripts\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.384364 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-config-data\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.384522 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b7ww\" (UniqueName: \"kubernetes.io/projected/76119988-951c-4bee-9832-7ac41e0335de-kube-api-access-4b7ww\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc 
kubenswrapper[4902]: I0121 16:20:06.390925 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-combined-ca-bundle\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.407979 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-scripts\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.426858 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-config-data\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.427671 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b7ww\" (UniqueName: \"kubernetes.io/projected/76119988-951c-4bee-9832-7ac41e0335de-kube-api-access-4b7ww\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.460687 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 21 16:20:07 crc kubenswrapper[4902]: I0121 16:20:07.173205 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 21 16:20:07 crc kubenswrapper[4902]: I0121 16:20:07.661693 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerStarted","Data":"6eedcc7523efe081a15cb931149e929c40f8fd79a47397d367b9e351ed5ed0bc"} Jan 21 16:20:08 crc kubenswrapper[4902]: I0121 16:20:08.682285 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerStarted","Data":"a1aabff299ab969d0cfdbf9f4830081affd8dc382ecd5010dad633aa2dd9aecc"} Jan 21 16:20:09 crc kubenswrapper[4902]: I0121 16:20:09.427801 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:09 crc kubenswrapper[4902]: I0121 16:20:09.428745 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="ceilometer-central-agent" containerID="cri-o://17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c" gracePeriod=30 Jan 21 16:20:09 crc kubenswrapper[4902]: I0121 16:20:09.428927 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="proxy-httpd" containerID="cri-o://feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b" gracePeriod=30 Jan 21 16:20:09 crc kubenswrapper[4902]: I0121 16:20:09.428976 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="sg-core" containerID="cri-o://ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce" gracePeriod=30 Jan 21 16:20:09 crc kubenswrapper[4902]: I0121 16:20:09.429007 4902 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="ceilometer-notification-agent" containerID="cri-o://52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636" gracePeriod=30 Jan 21 16:20:09 crc kubenswrapper[4902]: I0121 16:20:09.726221 4902 generic.go:334] "Generic (PLEG): container finished" podID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerID="ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce" exitCode=2 Jan 21 16:20:09 crc kubenswrapper[4902]: I0121 16:20:09.726273 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerDied","Data":"ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce"} Jan 21 16:20:10 crc kubenswrapper[4902]: I0121 16:20:10.277993 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 21 16:20:10 crc kubenswrapper[4902]: I0121 16:20:10.749507 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerStarted","Data":"80a1fc628ae387280e9939ad7bb8b7e183b4a8c6a2e6094ae36e73a0d3c70710"} Jan 21 16:20:10 crc kubenswrapper[4902]: I0121 16:20:10.765959 4902 generic.go:334] "Generic (PLEG): container finished" podID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerID="feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b" exitCode=0 Jan 21 16:20:10 crc kubenswrapper[4902]: I0121 16:20:10.765998 4902 generic.go:334] "Generic (PLEG): container finished" podID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerID="17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c" exitCode=0 Jan 21 16:20:10 crc kubenswrapper[4902]: I0121 16:20:10.766074 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerDied","Data":"feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b"} Jan 21 16:20:10 crc kubenswrapper[4902]: I0121 16:20:10.766112 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerDied","Data":"17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c"} Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.627519 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.759168 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgtzj\" (UniqueName: \"kubernetes.io/projected/d69a60b6-5623-4c6c-aaac-8d944a90748a-kube-api-access-wgtzj\") pod \"d69a60b6-5623-4c6c-aaac-8d944a90748a\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.759485 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-sg-core-conf-yaml\") pod \"d69a60b6-5623-4c6c-aaac-8d944a90748a\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.759667 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-run-httpd\") pod \"d69a60b6-5623-4c6c-aaac-8d944a90748a\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.760225 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-config-data\") pod \"d69a60b6-5623-4c6c-aaac-8d944a90748a\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.760341 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-scripts\") pod \"d69a60b6-5623-4c6c-aaac-8d944a90748a\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.760465 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-log-httpd\") pod \"d69a60b6-5623-4c6c-aaac-8d944a90748a\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.760575 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-combined-ca-bundle\") pod \"d69a60b6-5623-4c6c-aaac-8d944a90748a\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.760154 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d69a60b6-5623-4c6c-aaac-8d944a90748a" (UID: "d69a60b6-5623-4c6c-aaac-8d944a90748a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.762547 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d69a60b6-5623-4c6c-aaac-8d944a90748a" (UID: "d69a60b6-5623-4c6c-aaac-8d944a90748a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.777822 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69a60b6-5623-4c6c-aaac-8d944a90748a-kube-api-access-wgtzj" (OuterVolumeSpecName: "kube-api-access-wgtzj") pod "d69a60b6-5623-4c6c-aaac-8d944a90748a" (UID: "d69a60b6-5623-4c6c-aaac-8d944a90748a"). InnerVolumeSpecName "kube-api-access-wgtzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.789359 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-scripts" (OuterVolumeSpecName: "scripts") pod "d69a60b6-5623-4c6c-aaac-8d944a90748a" (UID: "d69a60b6-5623-4c6c-aaac-8d944a90748a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.818859 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerStarted","Data":"7757c4395511b6240ada2abaaec9c1d3750f8cb7664c807793da11bcff1a2a77"} Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.823481 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d69a60b6-5623-4c6c-aaac-8d944a90748a" (UID: "d69a60b6-5623-4c6c-aaac-8d944a90748a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.825183 4902 generic.go:334] "Generic (PLEG): container finished" podID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerID="52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636" exitCode=0 Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.825481 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.825320 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerDied","Data":"52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636"} Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.825682 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerDied","Data":"b1f7bb2c211e7960134591749043375f5b8146796c4a6545e0fa123d72475b61"} Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.825701 4902 scope.go:117] "RemoveContainer" containerID="feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.864469 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.864517 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.864528 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.864540 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.864551 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgtzj\" (UniqueName: \"kubernetes.io/projected/d69a60b6-5623-4c6c-aaac-8d944a90748a-kube-api-access-wgtzj\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.893983 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-config-data" (OuterVolumeSpecName: "config-data") pod "d69a60b6-5623-4c6c-aaac-8d944a90748a" (UID: "d69a60b6-5623-4c6c-aaac-8d944a90748a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.912712 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d69a60b6-5623-4c6c-aaac-8d944a90748a" (UID: "d69a60b6-5623-4c6c-aaac-8d944a90748a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.966185 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.966214 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.170418 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.188408 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.196654 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:13 crc kubenswrapper[4902]: E0121 16:20:13.197129 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="proxy-httpd" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.197148 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="proxy-httpd" Jan 21 16:20:13 crc kubenswrapper[4902]: E0121 16:20:13.197166 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="ceilometer-notification-agent" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.197174 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="ceilometer-notification-agent" Jan 21 16:20:13 crc kubenswrapper[4902]: E0121 16:20:13.197209 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="ceilometer-central-agent" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.197218 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="ceilometer-central-agent" Jan 21 16:20:13 crc kubenswrapper[4902]: E0121 16:20:13.197231 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="sg-core" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.197237 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="sg-core" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.197430 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="ceilometer-notification-agent" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.197447 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="proxy-httpd" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.197461 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="ceilometer-central-agent" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.197471 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="sg-core" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.199249 4902 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.202280 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.202372 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.227727 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.288901 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-scripts\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.288947 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-run-httpd\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.288968 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.288992 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-config-data\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.289066 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67rfz\" (UniqueName: \"kubernetes.io/projected/c4f95e4f-1f5c-4664-91c4-8c904bbac588-kube-api-access-67rfz\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.289086 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-log-httpd\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.289106 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.393161 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67rfz\" (UniqueName: \"kubernetes.io/projected/c4f95e4f-1f5c-4664-91c4-8c904bbac588-kube-api-access-67rfz\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc 
kubenswrapper[4902]: I0121 16:20:13.393500 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-log-httpd\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.393609 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.393960 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-scripts\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.394093 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-run-httpd\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.394198 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.394326 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-config-data\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.397748 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-run-httpd\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.401747 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-log-httpd\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.409251 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-scripts\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.416859 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.417727 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-config-data\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.421692 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.440150 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67rfz\" (UniqueName: \"kubernetes.io/projected/c4f95e4f-1f5c-4664-91c4-8c904bbac588-kube-api-access-67rfz\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.527319 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.761183 4902 scope.go:117] "RemoveContainer" containerID="ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.839224 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.848173 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.937447 4902 scope.go:117] "RemoveContainer" containerID="52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.962349 4902 scope.go:117] "RemoveContainer" containerID="17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.002531 4902 scope.go:117] "RemoveContainer" containerID="feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b" Jan 21 16:20:14 crc kubenswrapper[4902]: E0121 16:20:14.002928 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b\": container with ID starting with feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b not found: ID does not exist" containerID="feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.002966 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b"} err="failed to get container status \"feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b\": rpc error: code = NotFound desc = could not find container \"feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b\": container with ID starting with feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b not found: ID does not exist" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.002997 4902 scope.go:117] "RemoveContainer" containerID="ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce" Jan 21 16:20:14 crc kubenswrapper[4902]: E0121 16:20:14.004344 4902 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce\": container with ID starting with ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce not found: ID does not exist" containerID="ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.004376 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce"} err="failed to get container status \"ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce\": rpc error: code = NotFound desc = could not find container \"ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce\": container with ID starting with ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce not found: ID does not exist" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.004396 4902 scope.go:117] "RemoveContainer" containerID="52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636" Jan 21 16:20:14 crc kubenswrapper[4902]: E0121 16:20:14.004643 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636\": container with ID starting with 52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636 not found: ID does not exist" containerID="52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.004662 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636"} err="failed to get container status \"52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636\": rpc error: code = NotFound desc = could not find container \"52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636\": container with ID starting with 52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636 not found: ID does not exist" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.004677 4902 scope.go:117] "RemoveContainer" containerID="17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c" Jan 21 16:20:14 crc kubenswrapper[4902]: E0121 16:20:14.004897 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c\": container with ID starting with 17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c not found: ID does not exist" containerID="17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.004911 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c"} err="failed to get container status \"17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c\": rpc error: code = NotFound desc = could not find container \"17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c\": container with ID starting with 17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c not found: ID does not exist" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.309267 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" path="/var/lib/kubelet/pods/d69a60b6-5623-4c6c-aaac-8d944a90748a/volumes" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.311335 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.863820 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerStarted","Data":"6a0ffc37c1dc6797f40e78442f47022b5947c62404a4648910e4832c3ca3e7c8"} Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.867437 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-api" containerID="cri-o://a1aabff299ab969d0cfdbf9f4830081affd8dc382ecd5010dad633aa2dd9aecc" gracePeriod=30 Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.867656 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerStarted","Data":"dba9d32141eb642334d7b118ec33f493bcf870c1c7a90210f0fe7599d9f406a3"} Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.867945 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-listener" containerID="cri-o://dba9d32141eb642334d7b118ec33f493bcf870c1c7a90210f0fe7599d9f406a3" gracePeriod=30 Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.867994 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-notifier" containerID="cri-o://7757c4395511b6240ada2abaaec9c1d3750f8cb7664c807793da11bcff1a2a77" gracePeriod=30 Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.868026 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-evaluator" containerID="cri-o://80a1fc628ae387280e9939ad7bb8b7e183b4a8c6a2e6094ae36e73a0d3c70710" gracePeriod=30 Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.877826 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.893257 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.300447 podStartE2EDuration="8.893233963s" podCreationTimestamp="2026-01-21 16:20:06 +0000 UTC" firstStartedPulling="2026-01-21 16:20:07.180677009 +0000 UTC m=+6369.257510038" lastFinishedPulling="2026-01-21 16:20:13.773463972 +0000 UTC m=+6375.850297001" observedRunningTime="2026-01-21 16:20:14.892947645 +0000 UTC m=+6376.969780674" watchObservedRunningTime="2026-01-21 16:20:14.893233963 +0000 UTC m=+6376.970066992" Jan 21 16:20:15 crc kubenswrapper[4902]: I0121 16:20:15.875943 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerStarted","Data":"a8f0823ba1c0a5684f79d4de9630d7b181fe3de6041c572de1b9d8bd941a0b73"} Jan 21 16:20:15 crc kubenswrapper[4902]: I0121 16:20:15.876819 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerStarted","Data":"75fba54a577984f27ac176f357e52d94c5cdfda78ed3b9823956d8f8f3c0ca23"} Jan 21 16:20:15 crc kubenswrapper[4902]: I0121 16:20:15.878568 4902 generic.go:334] "Generic (PLEG): container finished" podID="76119988-951c-4bee-9832-7ac41e0335de" containerID="7757c4395511b6240ada2abaaec9c1d3750f8cb7664c807793da11bcff1a2a77" exitCode=0 Jan 21 16:20:15 crc kubenswrapper[4902]: I0121 16:20:15.878656 4902 generic.go:334] "Generic (PLEG): container finished" podID="76119988-951c-4bee-9832-7ac41e0335de" containerID="80a1fc628ae387280e9939ad7bb8b7e183b4a8c6a2e6094ae36e73a0d3c70710" exitCode=0 Jan 21 16:20:15 crc kubenswrapper[4902]: I0121 16:20:15.878718 4902 generic.go:334] "Generic (PLEG): container finished" podID="76119988-951c-4bee-9832-7ac41e0335de" containerID="a1aabff299ab969d0cfdbf9f4830081affd8dc382ecd5010dad633aa2dd9aecc" exitCode=0 Jan 21 16:20:15 crc kubenswrapper[4902]: I0121 16:20:15.878646 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerDied","Data":"7757c4395511b6240ada2abaaec9c1d3750f8cb7664c807793da11bcff1a2a77"} Jan 21 16:20:15 crc kubenswrapper[4902]: I0121 16:20:15.878822 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerDied","Data":"80a1fc628ae387280e9939ad7bb8b7e183b4a8c6a2e6094ae36e73a0d3c70710"} Jan 21 16:20:15 crc kubenswrapper[4902]: I0121 16:20:15.878835 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerDied","Data":"a1aabff299ab969d0cfdbf9f4830081affd8dc382ecd5010dad633aa2dd9aecc"} Jan 21 16:20:16 crc kubenswrapper[4902]: I0121 16:20:16.889469 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerStarted","Data":"7fe3c3c2d648b699f01ffcd94a87024e19a4dd935851ffcf7e84ba63d370a012"} Jan 21 16:20:18 crc kubenswrapper[4902]: I0121 16:20:18.909750 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerStarted","Data":"5caee1232afcd9c4c5a8dc77f05596a1ad92560cc083b64beb6fa8f0cbb3b5ef"} Jan 21 16:20:18 crc kubenswrapper[4902]: I0121 16:20:18.910262 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:20:18 crc kubenswrapper[4902]: I0121 16:20:18.930357 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.538001097 podStartE2EDuration="5.930336278s" podCreationTimestamp="2026-01-21 16:20:13 +0000 UTC" firstStartedPulling="2026-01-21 16:20:14.304880817 +0000 UTC m=+6376.381713846" lastFinishedPulling="2026-01-21 16:20:17.697215998 +0000 UTC m=+6379.774049027" observedRunningTime="2026-01-21 16:20:18.929753622 +0000 UTC m=+6381.006586651" watchObservedRunningTime="2026-01-21 16:20:18.930336278 +0000 UTC m=+6381.007169307" Jan 21 16:20:28 crc kubenswrapper[4902]: I0121 16:20:28.111660 4902 scope.go:117] "RemoveContainer" containerID="bfee1fd2715dd8d05c9392fd3ab86d1d97c355292e968dc34fcc4d66a846b5d3" Jan 21 16:20:28 crc kubenswrapper[4902]: I0121 16:20:28.140404 4902 scope.go:117] "RemoveContainer" containerID="03abc4558e909383d3d41af8248acf4829b9d6450d3df00a2f6958bd3e3264e7" Jan 21 16:20:28 crc 
kubenswrapper[4902]: I0121 16:20:28.220417 4902 scope.go:117] "RemoveContainer" containerID="e3e9c87d9d90cc49442da28637d2cced6b19e9645d80d76c03c98029e5898f54" Jan 21 16:20:28 crc kubenswrapper[4902]: I0121 16:20:28.255945 4902 scope.go:117] "RemoveContainer" containerID="aef8011a0955408b9b496fd1dcaa48e11cb807245e77d4a67f379e75f01adc85" Jan 21 16:20:43 crc kubenswrapper[4902]: I0121 16:20:43.531624 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.448634 4902 generic.go:334] "Generic (PLEG): container finished" podID="76119988-951c-4bee-9832-7ac41e0335de" containerID="dba9d32141eb642334d7b118ec33f493bcf870c1c7a90210f0fe7599d9f406a3" exitCode=137 Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.448714 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerDied","Data":"dba9d32141eb642334d7b118ec33f493bcf870c1c7a90210f0fe7599d9f406a3"} Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.820507 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.925813 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-scripts\") pod \"76119988-951c-4bee-9832-7ac41e0335de\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.925859 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b7ww\" (UniqueName: \"kubernetes.io/projected/76119988-951c-4bee-9832-7ac41e0335de-kube-api-access-4b7ww\") pod \"76119988-951c-4bee-9832-7ac41e0335de\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.925946 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-config-data\") pod \"76119988-951c-4bee-9832-7ac41e0335de\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.927032 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-combined-ca-bundle\") pod \"76119988-951c-4bee-9832-7ac41e0335de\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.933427 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76119988-951c-4bee-9832-7ac41e0335de-kube-api-access-4b7ww" (OuterVolumeSpecName: "kube-api-access-4b7ww") pod "76119988-951c-4bee-9832-7ac41e0335de" (UID: "76119988-951c-4bee-9832-7ac41e0335de"). InnerVolumeSpecName "kube-api-access-4b7ww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.958275 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-scripts" (OuterVolumeSpecName: "scripts") pod "76119988-951c-4bee-9832-7ac41e0335de" (UID: "76119988-951c-4bee-9832-7ac41e0335de"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.030909 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.030950 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b7ww\" (UniqueName: \"kubernetes.io/projected/76119988-951c-4bee-9832-7ac41e0335de-kube-api-access-4b7ww\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.075991 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76119988-951c-4bee-9832-7ac41e0335de" (UID: "76119988-951c-4bee-9832-7ac41e0335de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.090643 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-config-data" (OuterVolumeSpecName: "config-data") pod "76119988-951c-4bee-9832-7ac41e0335de" (UID: "76119988-951c-4bee-9832-7ac41e0335de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.133436 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.133476 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.464718 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerDied","Data":"6eedcc7523efe081a15cb931149e929c40f8fd79a47397d367b9e351ed5ed0bc"} Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.464785 4902 scope.go:117] "RemoveContainer" containerID="dba9d32141eb642334d7b118ec33f493bcf870c1c7a90210f0fe7599d9f406a3" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.464798 4902 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.489811 4902 scope.go:117] "RemoveContainer" containerID="7757c4395511b6240ada2abaaec9c1d3750f8cb7664c807793da11bcff1a2a77"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.491543 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.514115 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"]
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.519864 4902 scope.go:117] "RemoveContainer" containerID="80a1fc628ae387280e9939ad7bb8b7e183b4a8c6a2e6094ae36e73a0d3c70710"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.537285 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Jan 21 16:20:46 crc kubenswrapper[4902]: E0121 16:20:46.537856 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-listener"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.537876 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-listener"
Jan 21 16:20:46 crc kubenswrapper[4902]: E0121 16:20:46.537894 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-evaluator"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.537901 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-evaluator"
Jan 21 16:20:46 crc kubenswrapper[4902]: E0121 16:20:46.537908 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-notifier"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.537914 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-notifier"
Jan 21 16:20:46 crc kubenswrapper[4902]: E0121 16:20:46.537952 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-api"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.537958 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-api"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.538234 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-notifier"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.538260 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-evaluator"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.538270 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-listener"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.538283 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-api"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.540451 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.540551 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
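
Note on the "RemoveStaleState" burst above: when the replacement aodh-0 (same name, new UID) is admitted, the CPU and memory managers purge per-container assignments keyed by the old pod UID 76119988-… before the new pod starts; the E-level lines only report that stale entries existed. A minimal sketch of that UID-keyed cleanup, with hypothetical types rather than kubelet's:

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // removeStaleState drops assignments whose pod UID is no longer active,
    // mirroring "RemoveStaleState: removing container" / "Deleted CPUSet
    // assignment" in the log above.
    func removeStaleState(assignments map[key]string, active map[string]bool) {
    	for k := range assignments {
    		if !active[k.podUID] {
    			fmt.Printf("RemoveStaleState: removing container %s/%s\n", k.podUID, k.container)
    			delete(assignments, k)
    		}
    	}
    }

    func main() {
    	old := "76119988-951c-4bee-9832-7ac41e0335de"
    	assignments := map[key]string{
    		{old, "aodh-api"}:      "shared pool",
    		{old, "aodh-listener"}: "shared pool",
    	}
    	removeStaleState(assignments, map[string]bool{}) // the old UID is gone
    	fmt.Println(len(assignments))                    // 0
    }
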
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.545125 4902 scope.go:117] "RemoveContainer" containerID="a1aabff299ab969d0cfdbf9f4830081affd8dc382ecd5010dad633aa2dd9aecc"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.551660 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.552472 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-zlqm8"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.552599 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.552702 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.552981 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.645199 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-public-tls-certs\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.645243 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-internal-tls-certs\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.645297 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shsrb\" (UniqueName: \"kubernetes.io/projected/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-kube-api-access-shsrb\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.645409 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-combined-ca-bundle\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.645716 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-config-data\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.645998 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-scripts\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0"
Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.748681 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shsrb\" (UniqueName: \"kubernetes.io/projected/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-kube-api-access-shsrb\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0"
\"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.748767 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-combined-ca-bundle\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.748879 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-config-data\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.748972 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-scripts\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.749068 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-public-tls-certs\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.749104 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-internal-tls-certs\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.753162 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-public-tls-certs\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.753585 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-scripts\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.753848 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-internal-tls-certs\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.754355 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-combined-ca-bundle\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.757680 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-config-data\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.769203 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-shsrb\" (UniqueName: \"kubernetes.io/projected/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-kube-api-access-shsrb\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.885367 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.263907 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.264488 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="11823665-4fce-4950-a6d3-bc34bafbc01d" containerName="kube-state-metrics" containerID="cri-o://2ef81e85f6901284a8b407191f27c64483362c30e1987b357fa5d21aa8dc8169" gracePeriod=30 Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.381067 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.483620 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"da0893d4-ad82-4a00-8ccf-5e33ead4d85d","Type":"ContainerStarted","Data":"42ae57e5d80c3db206b50a9ae70b559daf09dddf48d7ab2aa530f63f0b4e7ebb"} Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.486532 4902 generic.go:334] "Generic (PLEG): container finished" podID="11823665-4fce-4950-a6d3-bc34bafbc01d" containerID="2ef81e85f6901284a8b407191f27c64483362c30e1987b357fa5d21aa8dc8169" exitCode=2 Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.486564 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"11823665-4fce-4950-a6d3-bc34bafbc01d","Type":"ContainerDied","Data":"2ef81e85f6901284a8b407191f27c64483362c30e1987b357fa5d21aa8dc8169"} Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.705648 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.870717 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xqw8\" (UniqueName: \"kubernetes.io/projected/11823665-4fce-4950-a6d3-bc34bafbc01d-kube-api-access-7xqw8\") pod \"11823665-4fce-4950-a6d3-bc34bafbc01d\" (UID: \"11823665-4fce-4950-a6d3-bc34bafbc01d\") " Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.874386 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11823665-4fce-4950-a6d3-bc34bafbc01d-kube-api-access-7xqw8" (OuterVolumeSpecName: "kube-api-access-7xqw8") pod "11823665-4fce-4950-a6d3-bc34bafbc01d" (UID: "11823665-4fce-4950-a6d3-bc34bafbc01d"). InnerVolumeSpecName "kube-api-access-7xqw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.973148 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xqw8\" (UniqueName: \"kubernetes.io/projected/11823665-4fce-4950-a6d3-bc34bafbc01d-kube-api-access-7xqw8\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.308325 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76119988-951c-4bee-9832-7ac41e0335de" path="/var/lib/kubelet/pods/76119988-951c-4bee-9832-7ac41e0335de/volumes" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.499246 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"da0893d4-ad82-4a00-8ccf-5e33ead4d85d","Type":"ContainerStarted","Data":"66a3046a4895e9e5103faf47f578caecb488a2e5c0322f867d6a150de6e92f58"} Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.502688 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"11823665-4fce-4950-a6d3-bc34bafbc01d","Type":"ContainerDied","Data":"6cce11915f96493257a7b6fc755ce2c6cf10806ef6428a4421a57569fde4b038"} Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.502726 4902 scope.go:117] "RemoveContainer" containerID="2ef81e85f6901284a8b407191f27c64483362c30e1987b357fa5d21aa8dc8169" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.502868 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.540130 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.550867 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.561347 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:20:48 crc kubenswrapper[4902]: E0121 16:20:48.561855 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11823665-4fce-4950-a6d3-bc34bafbc01d" containerName="kube-state-metrics" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.561892 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="11823665-4fce-4950-a6d3-bc34bafbc01d" containerName="kube-state-metrics" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.562176 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="11823665-4fce-4950-a6d3-bc34bafbc01d" containerName="kube-state-metrics" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.563024 4902 util.go:30] "No sandbox for pod can be found. 
Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.568548 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.570216 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.575286 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.692163 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln6bv\" (UniqueName: \"kubernetes.io/projected/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-api-access-ln6bv\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.692364 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.692546 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.692593 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.794311 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.794445 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.794492 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.794615 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln6bv\" (UniqueName: \"kubernetes.io/projected/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-api-access-ln6bv\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.799987 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.800159 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.802350 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.816565 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln6bv\" (UniqueName: \"kubernetes.io/projected/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-api-access-ln6bv\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.891367 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.194540 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.195007 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="ceilometer-central-agent" containerID="cri-o://75fba54a577984f27ac176f357e52d94c5cdfda78ed3b9823956d8f8f3c0ca23" gracePeriod=30
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.195099 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="ceilometer-notification-agent" containerID="cri-o://a8f0823ba1c0a5684f79d4de9630d7b181fe3de6041c572de1b9d8bd941a0b73" gracePeriod=30
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.195123 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="proxy-httpd" containerID="cri-o://5caee1232afcd9c4c5a8dc77f05596a1ad92560cc083b64beb6fa8f0cbb3b5ef" gracePeriod=30
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.195319 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="sg-core" containerID="cri-o://7fe3c3c2d648b699f01ffcd94a87024e19a4dd935851ffcf7e84ba63d370a012" gracePeriod=30
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.401407 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 16:20:49 crc kubenswrapper[4902]: W0121 16:20:49.408497 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4fb45d4_a64d_4e42_86b5_9e3924f0f877.slice/crio-2d6cac04b94abd198ddc9f056233f15ee32d343ba2114d80269dbe12097e3669 WatchSource:0}: Error finding container 2d6cac04b94abd198ddc9f056233f15ee32d343ba2114d80269dbe12097e3669: Status 404 returned error can't find the container with id 2d6cac04b94abd198ddc9f056233f15ee32d343ba2114d80269dbe12097e3669
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.521186 4902 generic.go:334] "Generic (PLEG): container finished" podID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerID="5caee1232afcd9c4c5a8dc77f05596a1ad92560cc083b64beb6fa8f0cbb3b5ef" exitCode=0
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.521518 4902 generic.go:334] "Generic (PLEG): container finished" podID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerID="7fe3c3c2d648b699f01ffcd94a87024e19a4dd935851ffcf7e84ba63d370a012" exitCode=2
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.521243 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerDied","Data":"5caee1232afcd9c4c5a8dc77f05596a1ad92560cc083b64beb6fa8f0cbb3b5ef"}
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.521638 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerDied","Data":"7fe3c3c2d648b699f01ffcd94a87024e19a4dd935851ffcf7e84ba63d370a012"}
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.523362 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c4fb45d4-a64d-4e42-86b5-9e3924f0f877","Type":"ContainerStarted","Data":"2d6cac04b94abd198ddc9f056233f15ee32d343ba2114d80269dbe12097e3669"}
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.611679 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cfddb65-w5qxr"]
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.613918 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.616123 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.630145 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cfddb65-w5qxr"]
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.720594 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28xdc\" (UniqueName: \"kubernetes.io/projected/1a6d8ddd-b000-4d99-a48a-394c9b673d67-kube-api-access-28xdc\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.720796 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-dns-svc\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.720825 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-sb\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.720847 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-nb\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.720915 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-openstack-cell1\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.720953 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-config\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.822792 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28xdc\" (UniqueName: \"kubernetes.io/projected/1a6d8ddd-b000-4d99-a48a-394c9b673d67-kube-api-access-28xdc\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.822983 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-dns-svc\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.823015 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-sb\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.823067 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-nb\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.823139 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-openstack-cell1\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.823196 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-config\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.824634 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-sb\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.824705 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-openstack-cell1\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.824734 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-nb\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.824734 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-dns-svc\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.824749 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-config\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.855908 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28xdc\" (UniqueName: \"kubernetes.io/projected/1a6d8ddd-b000-4d99-a48a-394c9b673d67-kube-api-access-28xdc\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.937109 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:20:50 crc kubenswrapper[4902]: I0121 16:20:50.309876 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11823665-4fce-4950-a6d3-bc34bafbc01d" path="/var/lib/kubelet/pods/11823665-4fce-4950-a6d3-bc34bafbc01d/volumes"
Jan 21 16:20:50 crc kubenswrapper[4902]: I0121 16:20:50.535940 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"da0893d4-ad82-4a00-8ccf-5e33ead4d85d","Type":"ContainerStarted","Data":"dd134312a8e6d13b94423c2b0f77109d171de479dad512d546b77e5a94340278"}
Jan 21 16:20:50 crc kubenswrapper[4902]: I0121 16:20:50.540885 4902 generic.go:334] "Generic (PLEG): container finished" podID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerID="75fba54a577984f27ac176f357e52d94c5cdfda78ed3b9823956d8f8f3c0ca23" exitCode=0
Jan 21 16:20:50 crc kubenswrapper[4902]: I0121 16:20:50.540940 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerDied","Data":"75fba54a577984f27ac176f357e52d94c5cdfda78ed3b9823956d8f8f3c0ca23"}
Jan 21 16:20:50 crc kubenswrapper[4902]: I0121 16:20:50.552139 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cfddb65-w5qxr"]
Jan 21 16:20:50 crc kubenswrapper[4902]: W0121 16:20:50.592269 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a6d8ddd_b000_4d99_a48a_394c9b673d67.slice/crio-861b75366a847a792132019b5bf5301f2b69db15d74e5a937b6410bd67658d40 WatchSource:0}: Error finding container 861b75366a847a792132019b5bf5301f2b69db15d74e5a937b6410bd67658d40: Status 404 returned error can't find the container with id 861b75366a847a792132019b5bf5301f2b69db15d74e5a937b6410bd67658d40
Jan 21 16:20:51 crc kubenswrapper[4902]: I0121 16:20:51.554595 4902 generic.go:334] "Generic (PLEG): container finished" podID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" containerID="9bddc8d1320be1ebe8e83f1e5a8405452ed01e5c87ac037e91ffd39c0dec9810" exitCode=0
Jan 21 16:20:51 crc kubenswrapper[4902]: I0121 16:20:51.555084 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" event={"ID":"1a6d8ddd-b000-4d99-a48a-394c9b673d67","Type":"ContainerDied","Data":"9bddc8d1320be1ebe8e83f1e5a8405452ed01e5c87ac037e91ffd39c0dec9810"}
Jan 21 16:20:51 crc kubenswrapper[4902]: I0121 16:20:51.555111 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" event={"ID":"1a6d8ddd-b000-4d99-a48a-394c9b673d67","Type":"ContainerStarted","Data":"861b75366a847a792132019b5bf5301f2b69db15d74e5a937b6410bd67658d40"}
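
Note on the two W-level manager.go:1169 entries above ("Status 404 ... can't find the container"): these look like the usual cgroup-watch race, where the watcher sees a new crio-… cgroup before the runtime can answer a lookup for it; the event is dropped and the container is picked up on a later pass. A sketch of that tolerate-404 pattern, with hypothetical helper names:

    package main

    import (
    	"errors"
    	"fmt"
    )

    var errNotFound = errors.New("status 404: container not found yet")

    // lookup stands in for the runtime query that raced the cgroup watch.
    func lookup(id string) error { return errNotFound }

    func main() {
    	id := "2d6cac04b94abd198ddc9f056233f15ee32d343ba2114d80269dbe12097e3669"
    	if err := lookup(id); errors.Is(err, errNotFound) {
    		// Warn and drop: a later list/watch pass will find the container.
    		fmt.Printf("Failed to process watch event for %s: %v\n", id[:12], err)
    	}
    }
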
event={"ID":"1a6d8ddd-b000-4d99-a48a-394c9b673d67","Type":"ContainerStarted","Data":"861b75366a847a792132019b5bf5301f2b69db15d74e5a937b6410bd67658d40"} Jan 21 16:20:51 crc kubenswrapper[4902]: I0121 16:20:51.564081 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"da0893d4-ad82-4a00-8ccf-5e33ead4d85d","Type":"ContainerStarted","Data":"de87b2378d70a3b0fe8a6ee1a75d737943529d10890ad31f6bf3b5bbb1222f4e"} Jan 21 16:20:51 crc kubenswrapper[4902]: I0121 16:20:51.567605 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c4fb45d4-a64d-4e42-86b5-9e3924f0f877","Type":"ContainerStarted","Data":"fb40b5c4780738fe4caf73638dac1ee89b017da961e27171f3fa517cf1c6e91b"} Jan 21 16:20:51 crc kubenswrapper[4902]: I0121 16:20:51.582359 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 16:20:51 crc kubenswrapper[4902]: I0121 16:20:51.615794 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.025719205 podStartE2EDuration="3.615709707s" podCreationTimestamp="2026-01-21 16:20:48 +0000 UTC" firstStartedPulling="2026-01-21 16:20:49.410988796 +0000 UTC m=+6411.487821825" lastFinishedPulling="2026-01-21 16:20:51.000979288 +0000 UTC m=+6413.077812327" observedRunningTime="2026-01-21 16:20:51.598487113 +0000 UTC m=+6413.675320162" watchObservedRunningTime="2026-01-21 16:20:51.615709707 +0000 UTC m=+6413.692542756" Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.603417 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"da0893d4-ad82-4a00-8ccf-5e33ead4d85d","Type":"ContainerStarted","Data":"258ead796100088e593345257501311f4b8fdf6493f496733ef79b978d12e809"} Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.615389 4902 generic.go:334] "Generic (PLEG): container finished" podID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerID="a8f0823ba1c0a5684f79d4de9630d7b181fe3de6041c572de1b9d8bd941a0b73" exitCode=0 Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.615487 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerDied","Data":"a8f0823ba1c0a5684f79d4de9630d7b181fe3de6041c572de1b9d8bd941a0b73"} Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.617826 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" event={"ID":"1a6d8ddd-b000-4d99-a48a-394c9b673d67","Type":"ContainerStarted","Data":"eb92db1dc47cbf0d9a02655148d3afd612fd5dad1b60c90dd453bb59f7ad2d9f"} Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.631820 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.871327239 podStartE2EDuration="6.631800301s" podCreationTimestamp="2026-01-21 16:20:46 +0000 UTC" firstStartedPulling="2026-01-21 16:20:47.384142269 +0000 UTC m=+6409.460975298" lastFinishedPulling="2026-01-21 16:20:52.144615331 +0000 UTC m=+6414.221448360" observedRunningTime="2026-01-21 16:20:52.625888834 +0000 UTC m=+6414.702721863" watchObservedRunningTime="2026-01-21 16:20:52.631800301 +0000 UTC m=+6414.708633320" Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.680056 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" podStartSLOduration=3.680024738 podStartE2EDuration="3.680024738s" 
podCreationTimestamp="2026-01-21 16:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:20:52.662158485 +0000 UTC m=+6414.738991524" watchObservedRunningTime="2026-01-21 16:20:52.680024738 +0000 UTC m=+6414.756857767" Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.885008 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.899582 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-scripts\") pod \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.899630 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-run-httpd\") pod \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.899680 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-log-httpd\") pod \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.899707 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-combined-ca-bundle\") pod \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.899788 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-sg-core-conf-yaml\") pod \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.899859 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67rfz\" (UniqueName: \"kubernetes.io/projected/c4f95e4f-1f5c-4664-91c4-8c904bbac588-kube-api-access-67rfz\") pod \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.899914 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-config-data\") pod \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.901544 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c4f95e4f-1f5c-4664-91c4-8c904bbac588" (UID: "c4f95e4f-1f5c-4664-91c4-8c904bbac588"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.901950 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c4f95e4f-1f5c-4664-91c4-8c904bbac588" (UID: "c4f95e4f-1f5c-4664-91c4-8c904bbac588"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.913188 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-scripts" (OuterVolumeSpecName: "scripts") pod "c4f95e4f-1f5c-4664-91c4-8c904bbac588" (UID: "c4f95e4f-1f5c-4664-91c4-8c904bbac588"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.913331 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f95e4f-1f5c-4664-91c4-8c904bbac588-kube-api-access-67rfz" (OuterVolumeSpecName: "kube-api-access-67rfz") pod "c4f95e4f-1f5c-4664-91c4-8c904bbac588" (UID: "c4f95e4f-1f5c-4664-91c4-8c904bbac588"). InnerVolumeSpecName "kube-api-access-67rfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.968229 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c4f95e4f-1f5c-4664-91c4-8c904bbac588" (UID: "c4f95e4f-1f5c-4664-91c4-8c904bbac588"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.002075 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.002111 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.002122 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.002130 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.002141 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67rfz\" (UniqueName: \"kubernetes.io/projected/c4f95e4f-1f5c-4664-91c4-8c904bbac588-kube-api-access-67rfz\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.024876 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4f95e4f-1f5c-4664-91c4-8c904bbac588" (UID: "c4f95e4f-1f5c-4664-91c4-8c904bbac588"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.043245 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-config-data" (OuterVolumeSpecName: "config-data") pod "c4f95e4f-1f5c-4664-91c4-8c904bbac588" (UID: "c4f95e4f-1f5c-4664-91c4-8c904bbac588"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.104431 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.104474 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.630512 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerDied","Data":"6a0ffc37c1dc6797f40e78442f47022b5947c62404a4648910e4832c3ca3e7c8"} Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.630583 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.630587 4902 scope.go:117] "RemoveContainer" containerID="5caee1232afcd9c4c5a8dc77f05596a1ad92560cc083b64beb6fa8f0cbb3b5ef" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.631844 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.663012 4902 scope.go:117] "RemoveContainer" containerID="7fe3c3c2d648b699f01ffcd94a87024e19a4dd935851ffcf7e84ba63d370a012" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.673952 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.687872 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.690913 4902 scope.go:117] "RemoveContainer" containerID="a8f0823ba1c0a5684f79d4de9630d7b181fe3de6041c572de1b9d8bd941a0b73" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.707547 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:53 crc kubenswrapper[4902]: E0121 16:20:53.708059 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="ceilometer-notification-agent" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.708081 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="ceilometer-notification-agent" Jan 21 16:20:53 crc kubenswrapper[4902]: E0121 16:20:53.708111 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="proxy-httpd" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.708120 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="proxy-httpd" Jan 21 16:20:53 crc kubenswrapper[4902]: E0121 16:20:53.708143 4902 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="sg-core" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.708151 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="sg-core" Jan 21 16:20:53 crc kubenswrapper[4902]: E0121 16:20:53.708178 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="ceilometer-central-agent" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.708186 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="ceilometer-central-agent" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.708423 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="ceilometer-central-agent" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.708450 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="ceilometer-notification-agent" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.708459 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="sg-core" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.708480 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="proxy-httpd" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.710530 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.719924 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-scripts\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.720122 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-run-httpd\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.720268 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxfzc\" (UniqueName: \"kubernetes.io/projected/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-kube-api-access-fxfzc\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.720412 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.720431 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.720615 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:20:53 crc 
kubenswrapper[4902]: I0121 16:20:53.720731 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.720736 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.720918 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.721152 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-log-httpd\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.721295 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-config-data\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.721611 4902 scope.go:117] "RemoveContainer" containerID="75fba54a577984f27ac176f357e52d94c5cdfda78ed3b9823956d8f8f3c0ca23" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.733322 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.823290 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-log-httpd\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.823352 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-config-data\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.823428 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-scripts\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.823460 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-run-httpd\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.823497 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxfzc\" (UniqueName: 
\"kubernetes.io/projected/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-kube-api-access-fxfzc\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.823529 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.823559 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.823587 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.829070 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.829371 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-log-httpd\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.832760 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-config-data\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.832818 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-run-httpd\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.835469 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-scripts\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.836664 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.837407 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.849882 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxfzc\" (UniqueName: \"kubernetes.io/projected/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-kube-api-access-fxfzc\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:54 crc kubenswrapper[4902]: I0121 16:20:54.042942 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:20:54 crc kubenswrapper[4902]: I0121 16:20:54.308070 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" path="/var/lib/kubelet/pods/c4f95e4f-1f5c-4664-91c4-8c904bbac588/volumes" Jan 21 16:20:54 crc kubenswrapper[4902]: I0121 16:20:54.538422 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:54 crc kubenswrapper[4902]: I0121 16:20:54.640512 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57c9a2f0-4583-4438-b35f-f92aa9a7efe8","Type":"ContainerStarted","Data":"80650a1cb535f1364497f61a972bf62bf38dfe62ce6816554930d1dc44e55ac6"} Jan 21 16:20:56 crc kubenswrapper[4902]: I0121 16:20:56.663620 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57c9a2f0-4583-4438-b35f-f92aa9a7efe8","Type":"ContainerStarted","Data":"275286b6050db6af2bddfc616e19826798b370ed60cafa1df32c7ce30574461e"} Jan 21 16:20:57 crc kubenswrapper[4902]: I0121 16:20:57.675618 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57c9a2f0-4583-4438-b35f-f92aa9a7efe8","Type":"ContainerStarted","Data":"620a949064b4846ca7bf499ce55f7ea0f8524126db27987486422027d701a448"} Jan 21 16:20:58 crc kubenswrapper[4902]: I0121 16:20:58.686216 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57c9a2f0-4583-4438-b35f-f92aa9a7efe8","Type":"ContainerStarted","Data":"bf2bae481d78afd4a1c6ad1175134042339026c121d475e917c5d3605e92cc1c"} Jan 21 16:20:58 crc kubenswrapper[4902]: I0121 16:20:58.899569 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 16:20:59 crc kubenswrapper[4902]: I0121 16:20:59.698459 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57c9a2f0-4583-4438-b35f-f92aa9a7efe8","Type":"ContainerStarted","Data":"f7f13f07105599c6839260edb92a502a27890441ab8132ce835a2dd8d0fb2803"} Jan 21 16:20:59 crc kubenswrapper[4902]: I0121 16:20:59.698990 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:20:59 crc kubenswrapper[4902]: I0121 16:20:59.727768 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9860646100000001 podStartE2EDuration="6.727744413s" podCreationTimestamp="2026-01-21 16:20:53 +0000 UTC" firstStartedPulling="2026-01-21 16:20:54.542484248 +0000 UTC m=+6416.619317277" lastFinishedPulling="2026-01-21 16:20:59.284164051 +0000 UTC m=+6421.360997080" observedRunningTime="2026-01-21 16:20:59.719152302 +0000 UTC m=+6421.795985331" watchObservedRunningTime="2026-01-21 16:20:59.727744413 +0000 UTC 
Jan 21 16:20:59 crc kubenswrapper[4902]: I0121 16:20:59.939070 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr"
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.025088 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f7b5475f9-g5lzz"]
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.025718 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" podUID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerName="dnsmasq-dns" containerID="cri-o://baf3c482643b3ef05bb015530d7c001d912cf37cabd28f9882b045c54788e7f1" gracePeriod=10
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.062778 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-974x9"]
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.083203 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3bb8-account-create-update-k967z"]
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.101461 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-8csjv"]
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.120642 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-8csjv"]
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.133518 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3bb8-account-create-update-k967z"]
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.148237 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-974x9"]
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.179793 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f4c775f77-hlsqd"]
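The "Killing container with a grace period" entry above (gracePeriod=10) reflects the standard termination sequence: signal the container to stop, wait up to the grace period for it to exit, and only then force-kill it; the exitCode=0 recorded further down shows this dnsmasq container exited cleanly within the window. A simplified Go sketch of that control flow, where signalFn and waitFn are invented placeholders standing in for runtime calls, not the actual CRI API:

    package main

    import (
        "fmt"
        "time"
    )

    // killWithGracePeriod sends TERM, waits up to the grace period for the
    // container to exit, and escalates to KILL only if the deadline passes.
    func killWithGracePeriod(id string, grace time.Duration,
        signalFn func(id, sig string) error, waitFn func(id string) <-chan struct{}) error {
        if err := signalFn(id, "TERM"); err != nil {
            return err
        }
        select {
        case <-waitFn(id): // exited within the grace period (the case seen here)
            return nil
        case <-time.After(grace):
            return signalFn(id, "KILL") // force kill once gracePeriod elapses
        }
    }

    func main() {
        exited := make(chan struct{})
        signal := func(id, sig string) error {
            fmt.Printf("sending SIG%s to %s\n", sig, id)
            if sig == "TERM" {
                close(exited) // pretend the container exits promptly, as dnsmasq did
            }
            return nil
        }
        wait := func(id string) <-chan struct{} { return exited }
        _ = killWithGracePeriod("cri-o://baf3c48...", 10*time.Second, signal, wait)
    }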
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.182298 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f4c775f77-hlsqd"
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.215090 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f4c775f77-hlsqd"]
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.314462 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chc7n\" (UniqueName: \"kubernetes.io/projected/45e057f7-f682-43f2-a02c-effad070763f-kube-api-access-chc7n\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd"
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.314521 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-openstack-cell1\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd"
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.314571 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-dns-svc\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd"
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.314596 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-ovsdbserver-nb\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd"
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.315100 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5963807a-fc48-485b-a3a5-7b07791dfdd0" path="/var/lib/kubelet/pods/5963807a-fc48-485b-a3a5-7b07791dfdd0/volumes"
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.315412 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-ovsdbserver-sb\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd"
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.315455 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-config\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd"
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.317432 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c847ba2-4e65-4677-b8b6-514162b0c1bc" path="/var/lib/kubelet/pods/9c847ba2-4e65-4677-b8b6-514162b0c1bc/volumes"
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.318717 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbe639d2-1844-47b8-b4c8-3b602547070a" path="/var/lib/kubelet/pods/fbe639d2-1844-47b8-b4c8-3b602547070a/volumes"
Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.339556 4902 prober.go:107] "Probe failed" probeType="Readiness"
pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" podUID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.88:5353: connect: connection refused" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.533372 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chc7n\" (UniqueName: \"kubernetes.io/projected/45e057f7-f682-43f2-a02c-effad070763f-kube-api-access-chc7n\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.533431 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-openstack-cell1\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.533494 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-dns-svc\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.534022 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-ovsdbserver-nb\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.534078 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-ovsdbserver-sb\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.534098 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-config\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.534324 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-openstack-cell1\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.534340 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-dns-svc\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.534819 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-ovsdbserver-nb\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: 
\"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.534953 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-config\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.535032 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-ovsdbserver-sb\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.571283 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chc7n\" (UniqueName: \"kubernetes.io/projected/45e057f7-f682-43f2-a02c-effad070763f-kube-api-access-chc7n\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.571883 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.757965 4902 generic.go:334] "Generic (PLEG): container finished" podID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerID="baf3c482643b3ef05bb015530d7c001d912cf37cabd28f9882b045c54788e7f1" exitCode=0 Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.758338 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" event={"ID":"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7","Type":"ContainerDied","Data":"baf3c482643b3ef05bb015530d7c001d912cf37cabd28f9882b045c54788e7f1"} Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.056355 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6d31-account-create-update-v52m2"] Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.067097 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-9kql9"] Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.079097 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6d31-account-create-update-v52m2"] Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.089407 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-9kql9"] Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.098947 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-4fdb-account-create-update-4c46m"] Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.108818 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-4fdb-account-create-update-4c46m"] Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.245766 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.255714 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-sb\") pod \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.255810 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-config\") pod \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.256522 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-nb\") pod \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.256649 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2vm2\" (UniqueName: \"kubernetes.io/projected/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-kube-api-access-h2vm2\") pod \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.256807 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-dns-svc\") pod \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.263472 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-kube-api-access-h2vm2" (OuterVolumeSpecName: "kube-api-access-h2vm2") pod "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" (UID: "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7"). InnerVolumeSpecName "kube-api-access-h2vm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.327333 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" (UID: "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.329497 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" (UID: "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.348924 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" (UID: "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.360411 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.360443 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.360456 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2vm2\" (UniqueName: \"kubernetes.io/projected/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-kube-api-access-h2vm2\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.360469 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.390444 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-config" (OuterVolumeSpecName: "config") pod "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" (UID: "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.463420 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.514605 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f4c775f77-hlsqd"] Jan 21 16:21:01 crc kubenswrapper[4902]: W0121 16:21:01.517219 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45e057f7_f682_43f2_a02c_effad070763f.slice/crio-967b3c6d41da05ee645c5169bf9065ab85463e55df1a04e9a7536c3d2ee1dff6 WatchSource:0}: Error finding container 967b3c6d41da05ee645c5169bf9065ab85463e55df1a04e9a7536c3d2ee1dff6: Status 404 returned error can't find the container with id 967b3c6d41da05ee645c5169bf9065ab85463e55df1a04e9a7536c3d2ee1dff6 Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.847444 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" event={"ID":"45e057f7-f682-43f2-a02c-effad070763f","Type":"ContainerStarted","Data":"967b3c6d41da05ee645c5169bf9065ab85463e55df1a04e9a7536c3d2ee1dff6"} Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.851601 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" event={"ID":"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7","Type":"ContainerDied","Data":"38677ca61f06b9260ed5f983f8682c334bd87743eff5be88bd87e6a5090aa3da"} Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.851654 4902 scope.go:117] "RemoveContainer" containerID="baf3c482643b3ef05bb015530d7c001d912cf37cabd28f9882b045c54788e7f1" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.851985 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.892219 4902 scope.go:117] "RemoveContainer" containerID="e76932770c6254b11b917bc645b83b0c1aaf28ee17d431c3d586506bef4ab067" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.896395 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f7b5475f9-g5lzz"] Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.920227 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f7b5475f9-g5lzz"] Jan 21 16:21:02 crc kubenswrapper[4902]: I0121 16:21:02.375082 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58" path="/var/lib/kubelet/pods/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58/volumes" Jan 21 16:21:02 crc kubenswrapper[4902]: I0121 16:21:02.378299 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172" path="/var/lib/kubelet/pods/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172/volumes" Jan 21 16:21:02 crc kubenswrapper[4902]: I0121 16:21:02.378959 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" path="/var/lib/kubelet/pods/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7/volumes" Jan 21 16:21:02 crc kubenswrapper[4902]: I0121 16:21:02.379805 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4f58498-29bd-47d8-8af1-ac98b4a9f510" path="/var/lib/kubelet/pods/e4f58498-29bd-47d8-8af1-ac98b4a9f510/volumes" Jan 21 16:21:02 crc kubenswrapper[4902]: I0121 16:21:02.863630 4902 generic.go:334] "Generic (PLEG): container finished" podID="45e057f7-f682-43f2-a02c-effad070763f" containerID="8e9e09464d5cd039c9442390e76b3f6f970a3878cbf93d936fdfb98fc79ed667" exitCode=0 Jan 21 16:21:02 crc kubenswrapper[4902]: I0121 16:21:02.863800 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" event={"ID":"45e057f7-f682-43f2-a02c-effad070763f","Type":"ContainerDied","Data":"8e9e09464d5cd039c9442390e76b3f6f970a3878cbf93d936fdfb98fc79ed667"} Jan 21 16:21:03 crc kubenswrapper[4902]: I0121 16:21:03.873944 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" event={"ID":"45e057f7-f682-43f2-a02c-effad070763f","Type":"ContainerStarted","Data":"71d693e87447205212c9352b002dc8c72d551deb1c29459ab33dc7d5f2feb14f"} Jan 21 16:21:03 crc kubenswrapper[4902]: I0121 16:21:03.874255 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:10 crc kubenswrapper[4902]: I0121 16:21:10.574203 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:10 crc kubenswrapper[4902]: I0121 16:21:10.598512 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" podStartSLOduration=10.59849317 podStartE2EDuration="10.59849317s" podCreationTimestamp="2026-01-21 16:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:21:03.901180606 +0000 UTC m=+6425.978013635" watchObservedRunningTime="2026-01-21 16:21:10.59849317 +0000 UTC m=+6432.675326199" Jan 21 16:21:10 crc kubenswrapper[4902]: I0121 16:21:10.673468 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cfddb65-w5qxr"] Jan 21 
16:21:10 crc kubenswrapper[4902]: I0121 16:21:10.673751 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" podUID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" containerName="dnsmasq-dns" containerID="cri-o://eb92db1dc47cbf0d9a02655148d3afd612fd5dad1b60c90dd453bb59f7ad2d9f" gracePeriod=10 Jan 21 16:21:10 crc kubenswrapper[4902]: I0121 16:21:10.955593 4902 generic.go:334] "Generic (PLEG): container finished" podID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" containerID="eb92db1dc47cbf0d9a02655148d3afd612fd5dad1b60c90dd453bb59f7ad2d9f" exitCode=0 Jan 21 16:21:10 crc kubenswrapper[4902]: I0121 16:21:10.955652 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" event={"ID":"1a6d8ddd-b000-4d99-a48a-394c9b673d67","Type":"ContainerDied","Data":"eb92db1dc47cbf0d9a02655148d3afd612fd5dad1b60c90dd453bb59f7ad2d9f"} Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.047996 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zr8nj"] Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.061383 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zr8nj"] Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.242676 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.279878 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-dns-svc\") pod \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.280027 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28xdc\" (UniqueName: \"kubernetes.io/projected/1a6d8ddd-b000-4d99-a48a-394c9b673d67-kube-api-access-28xdc\") pod \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.280087 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-sb\") pod \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.280197 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-config\") pod \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.280694 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-nb\") pod \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.280740 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-openstack-cell1\") pod \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " 
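The entries on either side of this point trace the kubelet volume reconciler's teardown sequence for pod 1a6d8ddd-b000-4d99-a48a-394c9b673d67: an "operationExecutor.UnmountVolume started" record per volume, an "UnmountVolume.TearDown succeeded" record once the plugin cleanup finishes, and finally a "Volume detached ... DevicePath \"\"" record when the actual-state bookkeeping is updated. A compact Go sketch of that three-step loop, using simplified invented types (vol, tearDown) rather than the kubelet's actual volumemanager API:

    package main

    import "log"

    // vol identifies a mounted volume by pod UID and volume name, mirroring
    // the fields in the surrounding entries (types simplified for illustration).
    type vol struct{ podUID, name string }

    // reconcileUnmounts walks the volumes still mounted on disk and unmounts
    // any that no pod wants anymore: start, tear down, report detached.
    func reconcileUnmounts(mounted []vol, stillDesired map[vol]bool, tearDown func(vol) error) {
        for _, v := range mounted {
            if stillDesired[v] {
                continue // pod still exists; keep the volume mounted
            }
            log.Printf("operationExecutor.UnmountVolume started for volume %q pod %q", v.name, v.podUID)
            if err := tearDown(v); err != nil {
                log.Printf("UnmountVolume.TearDown failed for %q: %v", v.name, err)
                continue
            }
            log.Printf("UnmountVolume.TearDown succeeded for volume %q", v.name)
            log.Printf("Volume detached for volume %q DevicePath %q", v.name, "")
        }
    }

    func main() {
        pod := "1a6d8ddd-b000-4d99-a48a-394c9b673d67"
        mounted := []vol{{pod, "dns-svc"}, {pod, "config"}, {pod, "ovsdbserver-nb"}, {pod, "openstack-cell1"}}
        // The deleted pod no longer appears in the desired state, so every
        // volume above goes through the unmount sequence.
        reconcileUnmounts(mounted, map[vol]bool{}, func(vol) error { return nil })
    }

The duplicate ovsdbserver-sb teardown just below, with its "Unmount skipped because path does not exist" warning, shows the loop is idempotent: a second pass over an already-removed mount point is logged and skipped rather than treated as an error.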
Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.328230 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a6d8ddd-b000-4d99-a48a-394c9b673d67-kube-api-access-28xdc" (OuterVolumeSpecName: "kube-api-access-28xdc") pod "1a6d8ddd-b000-4d99-a48a-394c9b673d67" (UID: "1a6d8ddd-b000-4d99-a48a-394c9b673d67"). InnerVolumeSpecName "kube-api-access-28xdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.381822 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1a6d8ddd-b000-4d99-a48a-394c9b673d67" (UID: "1a6d8ddd-b000-4d99-a48a-394c9b673d67"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.382442 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-sb\") pod \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " Jan 21 16:21:11 crc kubenswrapper[4902]: W0121 16:21:11.382648 4902 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1a6d8ddd-b000-4d99-a48a-394c9b673d67/volumes/kubernetes.io~configmap/ovsdbserver-sb Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.382669 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1a6d8ddd-b000-4d99-a48a-394c9b673d67" (UID: "1a6d8ddd-b000-4d99-a48a-394c9b673d67"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.387745 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28xdc\" (UniqueName: \"kubernetes.io/projected/1a6d8ddd-b000-4d99-a48a-394c9b673d67-kube-api-access-28xdc\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.388061 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.397426 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1a6d8ddd-b000-4d99-a48a-394c9b673d67" (UID: "1a6d8ddd-b000-4d99-a48a-394c9b673d67"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.398713 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-config" (OuterVolumeSpecName: "config") pod "1a6d8ddd-b000-4d99-a48a-394c9b673d67" (UID: "1a6d8ddd-b000-4d99-a48a-394c9b673d67"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.400812 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1a6d8ddd-b000-4d99-a48a-394c9b673d67" (UID: "1a6d8ddd-b000-4d99-a48a-394c9b673d67"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.401192 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "1a6d8ddd-b000-4d99-a48a-394c9b673d67" (UID: "1a6d8ddd-b000-4d99-a48a-394c9b673d67"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.491030 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.491082 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.491092 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.491122 4902 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.968695 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" event={"ID":"1a6d8ddd-b000-4d99-a48a-394c9b673d67","Type":"ContainerDied","Data":"861b75366a847a792132019b5bf5301f2b69db15d74e5a937b6410bd67658d40"} Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.968757 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.968755 4902 scope.go:117] "RemoveContainer" containerID="eb92db1dc47cbf0d9a02655148d3afd612fd5dad1b60c90dd453bb59f7ad2d9f" Jan 21 16:21:12 crc kubenswrapper[4902]: I0121 16:21:12.005876 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cfddb65-w5qxr"] Jan 21 16:21:12 crc kubenswrapper[4902]: I0121 16:21:12.010440 4902 scope.go:117] "RemoveContainer" containerID="9bddc8d1320be1ebe8e83f1e5a8405452ed01e5c87ac037e91ffd39c0dec9810" Jan 21 16:21:12 crc kubenswrapper[4902]: I0121 16:21:12.018143 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cfddb65-w5qxr"] Jan 21 16:21:12 crc kubenswrapper[4902]: I0121 16:21:12.307818 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" path="/var/lib/kubelet/pods/1a6d8ddd-b000-4d99-a48a-394c9b673d67/volumes" Jan 21 16:21:12 crc kubenswrapper[4902]: I0121 16:21:12.308807 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e6442c-e6fd-498e-b20d-e994574644ea" path="/var/lib/kubelet/pods/76e6442c-e6fd-498e-b20d-e994574644ea/volumes" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.599183 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q"] Jan 21 16:21:20 crc kubenswrapper[4902]: E0121 16:21:20.600208 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerName="dnsmasq-dns" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.600229 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerName="dnsmasq-dns" Jan 21 16:21:20 crc kubenswrapper[4902]: E0121 16:21:20.600273 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" containerName="init" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.600281 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" containerName="init" Jan 21 16:21:20 crc kubenswrapper[4902]: E0121 16:21:20.600305 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" containerName="dnsmasq-dns" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.600312 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" containerName="dnsmasq-dns" Jan 21 16:21:20 crc kubenswrapper[4902]: E0121 16:21:20.600327 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerName="init" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.600335 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerName="init" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.600592 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" containerName="dnsmasq-dns" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.600620 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerName="dnsmasq-dns" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.601604 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.605578 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.605651 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.605895 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.606145 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.618527 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q"] Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.719186 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.719234 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk4sf\" (UniqueName: \"kubernetes.io/projected/8b723bd7-4449-4516-bcc6-9d57d981fbda-kube-api-access-dk4sf\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.719418 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.719467 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.821075 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 
16:21:20.821163 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.821213 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.821239 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk4sf\" (UniqueName: \"kubernetes.io/projected/8b723bd7-4449-4516-bcc6-9d57d981fbda-kube-api-access-dk4sf\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.827675 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.828325 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.828583 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.836579 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk4sf\" (UniqueName: \"kubernetes.io/projected/8b723bd7-4449-4516-bcc6-9d57d981fbda-kube-api-access-dk4sf\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.928563 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:21 crc kubenswrapper[4902]: I0121 16:21:21.646335 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q"] Jan 21 16:21:22 crc kubenswrapper[4902]: I0121 16:21:22.084617 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" event={"ID":"8b723bd7-4449-4516-bcc6-9d57d981fbda","Type":"ContainerStarted","Data":"1034a61bf078784c944c93b937eae597f0c63c5b54d928588db8926f39a5574c"} Jan 21 16:21:24 crc kubenswrapper[4902]: I0121 16:21:24.066652 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 16:21:28 crc kubenswrapper[4902]: I0121 16:21:28.450596 4902 scope.go:117] "RemoveContainer" containerID="99ee9f7749f725c9768c807df30815b54542175e3f04ac09d8600799af1e8a19" Jan 21 16:21:30 crc kubenswrapper[4902]: I0121 16:21:30.040297 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mqjfk"] Jan 21 16:21:30 crc kubenswrapper[4902]: I0121 16:21:30.049917 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mqjfk"] Jan 21 16:21:30 crc kubenswrapper[4902]: I0121 16:21:30.498112 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fafbdf5-1100-4f6f-831e-c7dd0fc63586" path="/var/lib/kubelet/pods/6fafbdf5-1100-4f6f-831e-c7dd0fc63586/volumes" Jan 21 16:21:31 crc kubenswrapper[4902]: I0121 16:21:31.035169 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7ld7m"] Jan 21 16:21:31 crc kubenswrapper[4902]: I0121 16:21:31.048345 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7ld7m"] Jan 21 16:21:32 crc kubenswrapper[4902]: I0121 16:21:32.349601 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" path="/var/lib/kubelet/pods/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32/volumes" Jan 21 16:21:33 crc kubenswrapper[4902]: I0121 16:21:33.074828 4902 scope.go:117] "RemoveContainer" containerID="7e054620420f286eb319ea74bdca60ca0a6e43b9d52a5c4ad7043b88a7a02929" Jan 21 16:21:33 crc kubenswrapper[4902]: I0121 16:21:33.137052 4902 scope.go:117] "RemoveContainer" containerID="432f7ea37f3132bc52dfdced9ef97fb63c40a136694ea136586f2dee4c4a42b9" Jan 21 16:21:33 crc kubenswrapper[4902]: I0121 16:21:33.198772 4902 scope.go:117] "RemoveContainer" containerID="e2e258a3a1605851e7cb0ee36afe37bb54f98c9526d53b997a37f6c2cacd6192" Jan 21 16:21:33 crc kubenswrapper[4902]: I0121 16:21:33.237802 4902 scope.go:117] "RemoveContainer" containerID="f7c278e1da3c54353778da6f63a10b5d381146af280b9714be7ae6c71d2e3772" Jan 21 16:21:33 crc kubenswrapper[4902]: I0121 16:21:33.298246 4902 scope.go:117] "RemoveContainer" containerID="889fe026bf2a7b74189409dad70c2684f40ab43f381e9a39094266539161c3b9" Jan 21 16:21:33 crc kubenswrapper[4902]: I0121 16:21:33.444303 4902 scope.go:117] "RemoveContainer" containerID="1b0ff0cc281058854299a37c0eae467595b367d385ca015e5d0368dda142849e" Jan 21 16:21:33 crc kubenswrapper[4902]: I0121 16:21:33.464977 4902 scope.go:117] "RemoveContainer" containerID="9e04cfcc3e9b81819b9ca08bf91b4f4038827b55094f93cb2cd3586ac9a3d537" Jan 21 16:21:33 crc kubenswrapper[4902]: I0121 16:21:33.497205 4902 scope.go:117] "RemoveContainer" 
containerID="a9669cf760ec41fe8c9ac56172de1dfc2733858ea7763d6ffbfc15c535c182ce" Jan 21 16:21:34 crc kubenswrapper[4902]: I0121 16:21:34.280011 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" event={"ID":"8b723bd7-4449-4516-bcc6-9d57d981fbda","Type":"ContainerStarted","Data":"fe1f700c2f757bf5a46cbc09cb60490bbf221b2b62b3b42652e0e3b68bcf0dd9"} Jan 21 16:21:34 crc kubenswrapper[4902]: I0121 16:21:34.308938 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" podStartSLOduration=2.576475875 podStartE2EDuration="14.308905349s" podCreationTimestamp="2026-01-21 16:21:20 +0000 UTC" firstStartedPulling="2026-01-21 16:21:21.73515291 +0000 UTC m=+6443.811985939" lastFinishedPulling="2026-01-21 16:21:33.467582384 +0000 UTC m=+6455.544415413" observedRunningTime="2026-01-21 16:21:34.3050221 +0000 UTC m=+6456.381855189" watchObservedRunningTime="2026-01-21 16:21:34.308905349 +0000 UTC m=+6456.385738428" Jan 21 16:21:47 crc kubenswrapper[4902]: I0121 16:21:47.432972 4902 generic.go:334] "Generic (PLEG): container finished" podID="8b723bd7-4449-4516-bcc6-9d57d981fbda" containerID="fe1f700c2f757bf5a46cbc09cb60490bbf221b2b62b3b42652e0e3b68bcf0dd9" exitCode=0 Jan 21 16:21:47 crc kubenswrapper[4902]: I0121 16:21:47.433075 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" event={"ID":"8b723bd7-4449-4516-bcc6-9d57d981fbda","Type":"ContainerDied","Data":"fe1f700c2f757bf5a46cbc09cb60490bbf221b2b62b3b42652e0e3b68bcf0dd9"} Jan 21 16:21:47 crc kubenswrapper[4902]: I0121 16:21:47.770566 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:21:47 crc kubenswrapper[4902]: I0121 16:21:47.770656 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:21:48 crc kubenswrapper[4902]: I0121 16:21:48.897866 4902 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.002349 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-inventory\") pod \"8b723bd7-4449-4516-bcc6-9d57d981fbda\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") "
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.002821 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-ssh-key-openstack-cell1\") pod \"8b723bd7-4449-4516-bcc6-9d57d981fbda\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") "
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.002903 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk4sf\" (UniqueName: \"kubernetes.io/projected/8b723bd7-4449-4516-bcc6-9d57d981fbda-kube-api-access-dk4sf\") pod \"8b723bd7-4449-4516-bcc6-9d57d981fbda\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") "
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.002945 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-pre-adoption-validation-combined-ca-bundle\") pod \"8b723bd7-4449-4516-bcc6-9d57d981fbda\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") "
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.009641 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b723bd7-4449-4516-bcc6-9d57d981fbda-kube-api-access-dk4sf" (OuterVolumeSpecName: "kube-api-access-dk4sf") pod "8b723bd7-4449-4516-bcc6-9d57d981fbda" (UID: "8b723bd7-4449-4516-bcc6-9d57d981fbda"). InnerVolumeSpecName "kube-api-access-dk4sf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.009632 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "8b723bd7-4449-4516-bcc6-9d57d981fbda" (UID: "8b723bd7-4449-4516-bcc6-9d57d981fbda"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.033362 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-inventory" (OuterVolumeSpecName: "inventory") pod "8b723bd7-4449-4516-bcc6-9d57d981fbda" (UID: "8b723bd7-4449-4516-bcc6-9d57d981fbda"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.041993 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "8b723bd7-4449-4516-bcc6-9d57d981fbda" (UID: "8b723bd7-4449-4516-bcc6-9d57d981fbda"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.070889 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-qd5pv"]
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.086906 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-qd5pv"]
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.106113 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.106175 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.106187 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk4sf\" (UniqueName: \"kubernetes.io/projected/8b723bd7-4449-4516-bcc6-9d57d981fbda-kube-api-access-dk4sf\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.106228 4902 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.458967 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" event={"ID":"8b723bd7-4449-4516-bcc6-9d57d981fbda","Type":"ContainerDied","Data":"1034a61bf078784c944c93b937eae597f0c63c5b54d928588db8926f39a5574c"}
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.459010 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1034a61bf078784c944c93b937eae597f0c63c5b54d928588db8926f39a5574c"
Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.459089 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q"
Jan 21 16:21:50 crc kubenswrapper[4902]: I0121 16:21:50.306848 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87c2e205-1cb6-4b63-89d5-c03370d5cb02" path="/var/lib/kubelet/pods/87c2e205-1cb6-4b63-89d5-c03370d5cb02/volumes"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.362385 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"]
Jan 21 16:21:53 crc kubenswrapper[4902]: E0121 16:21:53.363435 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b723bd7-4449-4516-bcc6-9d57d981fbda" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.363456 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b723bd7-4449-4516-bcc6-9d57d981fbda" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.363775 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b723bd7-4449-4516-bcc6-9d57d981fbda" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.364772 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.370299 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.370668 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.370869 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.371035 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.400632 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"]
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.409738 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.409813 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.409845 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.410003 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28wbd\" (UniqueName: \"kubernetes.io/projected/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-kube-api-access-28wbd\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.511816 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.511898 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.511939 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.512186 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28wbd\" (UniqueName: \"kubernetes.io/projected/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-kube-api-access-28wbd\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.518819 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.520532 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.521742 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.529303 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28wbd\" (UniqueName: \"kubernetes.io/projected/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-kube-api-access-28wbd\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"
Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.689483 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"
Jan 21 16:21:54 crc kubenswrapper[4902]: I0121 16:21:54.315084 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"]
Jan 21 16:21:54 crc kubenswrapper[4902]: I0121 16:21:54.505542 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" event={"ID":"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f","Type":"ContainerStarted","Data":"d937f1a62ac88d359e95c410ee456b4680107ca512a37ba97d0e11eaf1bd08e7"}
Jan 21 16:21:55 crc kubenswrapper[4902]: I0121 16:21:55.529451 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" event={"ID":"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f","Type":"ContainerStarted","Data":"b5588d16688a7ebc8d6fd23427c875175924aa3ba2e94e6335eed27cd3b25dfb"}
Jan 21 16:21:55 crc kubenswrapper[4902]: I0121 16:21:55.554946 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" podStartSLOduration=1.877800626 podStartE2EDuration="2.55492171s" podCreationTimestamp="2026-01-21 16:21:53 +0000 UTC" firstStartedPulling="2026-01-21 16:21:54.29943043 +0000 UTC m=+6476.376263459" lastFinishedPulling="2026-01-21 16:21:54.976551514 +0000 UTC m=+6477.053384543" observedRunningTime="2026-01-21 16:21:55.545830524 +0000 UTC m=+6477.622663553" watchObservedRunningTime="2026-01-21 16:21:55.55492171 +0000 UTC m=+6477.631754729"
Jan 21 16:22:17 crc kubenswrapper[4902]: I0121 16:22:17.769649 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:22:17 crc kubenswrapper[4902]: I0121 16:22:17.770298 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:22:33 crc kubenswrapper[4902]: I0121 16:22:33.797515 4902 scope.go:117] "RemoveContainer" containerID="4c0781a19d8b6c48f488f12d1b70c08865b23377c77473fa64a8a1663801f2cb"
Jan 21 16:22:33 crc kubenswrapper[4902]: I0121 16:22:33.849839 4902 scope.go:117] "RemoveContainer" containerID="d6b0eff18372cd37ddfe92c986fb4a923d9dbd3f107869f09f0fdbe8e2eaaa5c"
Jan 21 16:22:33 crc kubenswrapper[4902]: I0121 16:22:33.906545 4902 scope.go:117] "RemoveContainer" containerID="2c69e68e7d02d1de6bf68e1e65e17ee7498b6d1191ba5efd74e3f15243d799ed"
Jan 21 16:22:33 crc kubenswrapper[4902]: I0121 16:22:33.968729 4902 scope.go:117] "RemoveContainer" containerID="dd9c814774718de26b2a6f5f159c980f718ec5bd198d471d2426d82a67f32ddd"
Jan 21 16:22:34 crc kubenswrapper[4902]: I0121 16:22:34.011818 4902 scope.go:117] "RemoveContainer" containerID="0f5fdb1f77ee5e53923e8edceba05628177b711a2533fe02fb33769c82576bcf"
Jan 21 16:22:34 crc kubenswrapper[4902]: I0121 16:22:34.043268 4902 scope.go:117] "RemoveContainer" containerID="b5b92e7f1cc27fed5221f05667fdb25b332ac410148a8012346660a03a7b0fdf"
Jan 21 16:22:47 crc kubenswrapper[4902]: I0121 16:22:47.769572 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:22:47 crc kubenswrapper[4902]: I0121 16:22:47.770195 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:22:47 crc kubenswrapper[4902]: I0121 16:22:47.770251 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb"
Jan 21 16:22:47 crc kubenswrapper[4902]: I0121 16:22:47.772386 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd9f943d521b68000af79f0fd73624ba084fada704e30191659b3cc0a8066bce"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:22:47 crc kubenswrapper[4902]: I0121 16:22:47.772458 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://dd9f943d521b68000af79f0fd73624ba084fada704e30191659b3cc0a8066bce" gracePeriod=600
Jan 21 16:22:48 crc kubenswrapper[4902]: I0121 16:22:48.182158 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="dd9f943d521b68000af79f0fd73624ba084fada704e30191659b3cc0a8066bce" exitCode=0
Jan 21 16:22:48 crc kubenswrapper[4902]: I0121 16:22:48.182186 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"dd9f943d521b68000af79f0fd73624ba084fada704e30191659b3cc0a8066bce"}
Jan 21 16:22:48 crc kubenswrapper[4902]: I0121 16:22:48.182508 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8"}
Jan 21 16:22:48 crc kubenswrapper[4902]: I0121 16:22:48.182540 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac"
Jan 21 16:23:22 crc kubenswrapper[4902]: I0121 16:23:22.049896 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-5zjhh"]
Jan 21 16:23:22 crc kubenswrapper[4902]: I0121 16:23:22.084609 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-ae8b-account-create-update-q86xl"]
Jan 21 16:23:22 crc kubenswrapper[4902]: I0121 16:23:22.095599 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-ae8b-account-create-update-q86xl"]
Jan 21 16:23:22 crc kubenswrapper[4902]: I0121 16:23:22.107443 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-5zjhh"]
Jan 21 16:23:22 crc kubenswrapper[4902]: I0121 16:23:22.316818 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1baaefdd-ea47-4ac0-98d0-d370180b0eb0" path="/var/lib/kubelet/pods/1baaefdd-ea47-4ac0-98d0-d370180b0eb0/volumes"
Jan 21 16:23:22 crc kubenswrapper[4902]: I0121 16:23:22.318262 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507bf37f-b9da-4064-970b-89f9a27589fe" path="/var/lib/kubelet/pods/507bf37f-b9da-4064-970b-89f9a27589fe/volumes"
Jan 21 16:23:28 crc kubenswrapper[4902]: I0121 16:23:28.059204 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-q8nvb"]
Jan 21 16:23:28 crc kubenswrapper[4902]: I0121 16:23:28.082608 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-q8nvb"]
Jan 21 16:23:28 crc kubenswrapper[4902]: I0121 16:23:28.308931 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a4e549-a509-40db-8756-e37432024793" path="/var/lib/kubelet/pods/f4a4e549-a509-40db-8756-e37432024793/volumes"
Jan 21 16:23:30 crc kubenswrapper[4902]: I0121 16:23:30.027471 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-d1a6-account-create-update-cw969"]
Jan 21 16:23:30 crc kubenswrapper[4902]: I0121 16:23:30.037447 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-d1a6-account-create-update-cw969"]
Jan 21 16:23:30 crc kubenswrapper[4902]: I0121 16:23:30.306768 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="502e21f3-ea57-4f04-8e23-9b45c7a07ca2" path="/var/lib/kubelet/pods/502e21f3-ea57-4f04-8e23-9b45c7a07ca2/volumes"
Jan 21 16:23:34 crc kubenswrapper[4902]: I0121 16:23:34.318998 4902 scope.go:117] "RemoveContainer" containerID="c51c67b3d5eb2547d5d118a566d05127b5232bbe2fa2468af4680ad00279aa48"
Jan 21 16:23:34 crc kubenswrapper[4902]: I0121 16:23:34.351537 4902 scope.go:117] "RemoveContainer" containerID="26b51b45f191ff662cf71fe75dfa0a28808489ff71c63772b28558abe727c5a5"
Jan 21 16:23:34 crc kubenswrapper[4902]: I0121 16:23:34.385459 4902 scope.go:117] "RemoveContainer" containerID="a5edfafdeacf21f426cc5bd6281b1cd868d12717fac023895ab55ea3fbcafc1e"
Jan 21 16:23:34 crc kubenswrapper[4902]: I0121 16:23:34.432600 4902 scope.go:117] "RemoveContainer" containerID="bbd0af7b0e6a302b723bb3848d085087ea3fb23417c8175750a0c41598fe534f"
Jan 21 16:23:34 crc kubenswrapper[4902]: I0121 16:23:34.496954 4902 scope.go:117] "RemoveContainer" containerID="5cf0b5bdbf01f12d44cd41471171a9c5244aec958a6477fc8835553eabc2f3b6"
Jan 21 16:23:34 crc kubenswrapper[4902]: I0121 16:23:34.568184 4902 scope.go:117] "RemoveContainer" containerID="69c36b0bae9178724a6d97de46722cf5b0cc80d59e4ce8f2e0554584489171d5"
Jan 21 16:23:34 crc kubenswrapper[4902]: I0121 16:23:34.620190 4902 scope.go:117] "RemoveContainer" containerID="7bdedfb5108f3ffecf10a0859392a7cf8d5159f213fdc4909c0c06024f91b0c1"
Jan 21 16:23:34 crc kubenswrapper[4902]: I0121 16:23:34.644321 4902 scope.go:117] "RemoveContainer" containerID="9ceea852acb3ca8b99175935197b72276107562be97cda3fb8e5495a3f66a192"
Jan 21 16:24:12 crc kubenswrapper[4902]: I0121 16:24:12.043730 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-vrr2k"]
Jan 21 16:24:12 crc kubenswrapper[4902]: I0121 16:24:12.052179 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-vrr2k"]
Jan 21 16:24:12 crc kubenswrapper[4902]: I0121 16:24:12.309256 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60a6ab47-0bbe-428a-82f5-478fc4c52e8a" path="/var/lib/kubelet/pods/60a6ab47-0bbe-428a-82f5-478fc4c52e8a/volumes"
Jan 21 16:24:34 crc kubenswrapper[4902]: I0121 16:24:34.839392 4902 scope.go:117] "RemoveContainer" containerID="abc9a540052a00b1952e4ccbff28d0fd5e66b03f552886a2028474527bd5343e"
Jan 21 16:24:34 crc kubenswrapper[4902]: I0121 16:24:34.872840 4902 scope.go:117] "RemoveContainer" containerID="f4cdf18149c84ac20ab00cae2362d90191fa45e99a1761f8508af240e2f326b6"
Jan 21 16:25:17 crc kubenswrapper[4902]: I0121 16:25:17.769596 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:25:17 crc kubenswrapper[4902]: I0121 16:25:17.770552 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:25:47 crc kubenswrapper[4902]: I0121 16:25:47.769946 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:25:47 crc kubenswrapper[4902]: I0121 16:25:47.770632 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:26:17 crc kubenswrapper[4902]: I0121 16:26:17.770125 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:26:17 crc kubenswrapper[4902]: I0121 16:26:17.770818 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:26:17 crc kubenswrapper[4902]: I0121 16:26:17.771014 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb"
Jan 21 16:26:17 crc kubenswrapper[4902]: I0121 16:26:17.772735 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:26:17 crc kubenswrapper[4902]: I0121 16:26:17.772876 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" gracePeriod=600
Jan 21 16:26:17 crc kubenswrapper[4902]: E0121 16:26:17.913811 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:26:18 crc kubenswrapper[4902]: I0121 16:26:18.157936 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" exitCode=0
Jan 21 16:26:18 crc kubenswrapper[4902]: I0121 16:26:18.158010 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8"}
Jan 21 16:26:18 crc kubenswrapper[4902]: I0121 16:26:18.158342 4902 scope.go:117] "RemoveContainer" containerID="dd9f943d521b68000af79f0fd73624ba084fada704e30191659b3cc0a8066bce"
Jan 21 16:26:18 crc kubenswrapper[4902]: I0121 16:26:18.161996 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8"
Jan 21 16:26:18 crc kubenswrapper[4902]: E0121 16:26:18.162641 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:26:29 crc kubenswrapper[4902]: I0121 16:26:29.295481 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8"
Jan 21 16:26:29 crc kubenswrapper[4902]: E0121 16:26:29.296329 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
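The repeated "back-off 5m0s restarting failed container" errors above show the kubelet's crash-loop back-off at its ceiling: by default the restart delay starts at 10s and doubles on each crash until capped at five minutes, which is the 5m0s in the CrashLoopBackOff message. A sketch of that schedule, assuming the default initial delay and cap (this log only shows the ceiling, not the earlier steps):

    import itertools

    # Assumed kubelet defaults: initial delay 10s, doubling per crash, 5m cap.
    def backoff_schedule(initial_s: int = 10, cap_s: int = 300):
        delay = initial_s
        while True:
            yield min(delay, cap_s)
            delay *= 2

    print(list(itertools.islice(backoff_schedule(), 7)))  # [10, 20, 40, 80, 160, 300, 300]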
Jan 21 16:26:41 crc kubenswrapper[4902]: I0121 16:26:41.295535 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8"
Jan 21 16:26:41 crc kubenswrapper[4902]: E0121 16:26:41.296773 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:26:49 crc kubenswrapper[4902]: I0121 16:26:49.830382 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6smhb"]
Jan 21 16:26:49 crc kubenswrapper[4902]: I0121 16:26:49.833345 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6smhb"
Jan 21 16:26:49 crc kubenswrapper[4902]: I0121 16:26:49.853034 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6smhb"]
Jan 21 16:26:49 crc kubenswrapper[4902]: I0121 16:26:49.964522 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-catalog-content\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb"
Jan 21 16:26:49 crc kubenswrapper[4902]: I0121 16:26:49.965101 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fctfd\" (UniqueName: \"kubernetes.io/projected/e1d2f8ef-0175-4070-881a-825ccd1219b8-kube-api-access-fctfd\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb"
Jan 21 16:26:49 crc kubenswrapper[4902]: I0121 16:26:49.965687 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-utilities\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb"
Jan 21 16:26:50 crc kubenswrapper[4902]: I0121 16:26:50.068215 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fctfd\" (UniqueName: \"kubernetes.io/projected/e1d2f8ef-0175-4070-881a-825ccd1219b8-kube-api-access-fctfd\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb"
Jan 21 16:26:50 crc kubenswrapper[4902]: I0121 16:26:50.068461 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-utilities\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb"
Jan 21 16:26:50 crc kubenswrapper[4902]: I0121 16:26:50.068538 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-catalog-content\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb"
Jan 21 16:26:50 crc kubenswrapper[4902]: I0121 16:26:50.069130 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-utilities\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb"
Jan 21 16:26:50 crc kubenswrapper[4902]: I0121 16:26:50.069149 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-catalog-content\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb"
Jan 21 16:26:50 crc kubenswrapper[4902]: I0121 16:26:50.095145 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fctfd\" (UniqueName: \"kubernetes.io/projected/e1d2f8ef-0175-4070-881a-825ccd1219b8-kube-api-access-fctfd\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb"
Jan 21 16:26:50 crc kubenswrapper[4902]: I0121 16:26:50.161077 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6smhb"
Jan 21 16:26:50 crc kubenswrapper[4902]: I0121 16:26:50.673949 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6smhb"]
Jan 21 16:26:51 crc kubenswrapper[4902]: I0121 16:26:51.512339 4902 generic.go:334] "Generic (PLEG): container finished" podID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerID="32cdb44674e6374f766b65eaed6a61b60758360dd1e8e594ab7a3baf4d914d87" exitCode=0
Jan 21 16:26:51 crc kubenswrapper[4902]: I0121 16:26:51.512462 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6smhb" event={"ID":"e1d2f8ef-0175-4070-881a-825ccd1219b8","Type":"ContainerDied","Data":"32cdb44674e6374f766b65eaed6a61b60758360dd1e8e594ab7a3baf4d914d87"}
Jan 21 16:26:51 crc kubenswrapper[4902]: I0121 16:26:51.512584 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6smhb" event={"ID":"e1d2f8ef-0175-4070-881a-825ccd1219b8","Type":"ContainerStarted","Data":"26557bb8550a70ceb69c02f48276580205f7ed70b0b2b78a7ba9c236ae6b41de"}
Jan 21 16:26:51 crc kubenswrapper[4902]: I0121 16:26:51.514832 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 16:26:53 crc kubenswrapper[4902]: I0121 16:26:53.542431 4902 generic.go:334] "Generic (PLEG): container finished" podID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerID="04aced0c0b567c17119cd21528fe883b24627e7fda15f96134eacb5302158c50" exitCode=0
Jan 21 16:26:53 crc kubenswrapper[4902]: I0121 16:26:53.542480 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6smhb" event={"ID":"e1d2f8ef-0175-4070-881a-825ccd1219b8","Type":"ContainerDied","Data":"04aced0c0b567c17119cd21528fe883b24627e7fda15f96134eacb5302158c50"}
Jan 21 16:26:54 crc kubenswrapper[4902]: I0121 16:26:54.556497 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6smhb" event={"ID":"e1d2f8ef-0175-4070-881a-825ccd1219b8","Type":"ContainerStarted","Data":"b373e6919ca764e66afc03ed68fe6af0501058a2ad9ef7fa08c0b3af4ce3215b"}
Jan 21 16:26:54 crc kubenswrapper[4902]: I0121 16:26:54.582526 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6smhb" podStartSLOduration=2.859371059 podStartE2EDuration="5.58249938s" podCreationTimestamp="2026-01-21 16:26:49 +0000 UTC" firstStartedPulling="2026-01-21 16:26:51.514559285 +0000 UTC m=+6773.591392314" lastFinishedPulling="2026-01-21 16:26:54.237687596 +0000 UTC m=+6776.314520635" observedRunningTime="2026-01-21 16:26:54.575765011 +0000 UTC m=+6776.652598080" watchObservedRunningTime="2026-01-21 16:26:54.58249938 +0000 UTC m=+6776.659332449"
Jan 21 16:26:55 crc kubenswrapper[4902]: I0121 16:26:55.295382 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8"
Jan 21 16:26:55 crc kubenswrapper[4902]: E0121 16:26:55.295690 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:27:00 crc kubenswrapper[4902]: I0121 16:27:00.162379 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6smhb"
Jan 21 16:27:00 crc kubenswrapper[4902]: I0121 16:27:00.163176 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6smhb"
Jan 21 16:27:00 crc kubenswrapper[4902]: I0121 16:27:00.212302 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6smhb"
Jan 21 16:27:00 crc kubenswrapper[4902]: I0121 16:27:00.684465 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6smhb"
Jan 21 16:27:00 crc kubenswrapper[4902]: I0121 16:27:00.739359 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6smhb"]
Jan 21 16:27:02 crc kubenswrapper[4902]: I0121 16:27:02.652204 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6smhb" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerName="registry-server" containerID="cri-o://b373e6919ca764e66afc03ed68fe6af0501058a2ad9ef7fa08c0b3af4ce3215b" gracePeriod=2
Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.041777 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-bjrq8"]
Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.050410 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-bjrq8"]
Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.665100 4902 generic.go:334] "Generic (PLEG): container finished" podID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerID="b373e6919ca764e66afc03ed68fe6af0501058a2ad9ef7fa08c0b3af4ce3215b" exitCode=0
Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.665167 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6smhb" event={"ID":"e1d2f8ef-0175-4070-881a-825ccd1219b8","Type":"ContainerDied","Data":"b373e6919ca764e66afc03ed68fe6af0501058a2ad9ef7fa08c0b3af4ce3215b"}
pod="openshift-marketplace/redhat-marketplace-6smhb" event={"ID":"e1d2f8ef-0175-4070-881a-825ccd1219b8","Type":"ContainerDied","Data":"b373e6919ca764e66afc03ed68fe6af0501058a2ad9ef7fa08c0b3af4ce3215b"} Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.665382 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6smhb" event={"ID":"e1d2f8ef-0175-4070-881a-825ccd1219b8","Type":"ContainerDied","Data":"26557bb8550a70ceb69c02f48276580205f7ed70b0b2b78a7ba9c236ae6b41de"} Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.665423 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26557bb8550a70ceb69c02f48276580205f7ed70b0b2b78a7ba9c236ae6b41de" Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.694290 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.810436 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fctfd\" (UniqueName: \"kubernetes.io/projected/e1d2f8ef-0175-4070-881a-825ccd1219b8-kube-api-access-fctfd\") pod \"e1d2f8ef-0175-4070-881a-825ccd1219b8\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.810619 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-catalog-content\") pod \"e1d2f8ef-0175-4070-881a-825ccd1219b8\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.810640 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-utilities\") pod \"e1d2f8ef-0175-4070-881a-825ccd1219b8\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.811554 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-utilities" (OuterVolumeSpecName: "utilities") pod "e1d2f8ef-0175-4070-881a-825ccd1219b8" (UID: "e1d2f8ef-0175-4070-881a-825ccd1219b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.816292 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2f8ef-0175-4070-881a-825ccd1219b8-kube-api-access-fctfd" (OuterVolumeSpecName: "kube-api-access-fctfd") pod "e1d2f8ef-0175-4070-881a-825ccd1219b8" (UID: "e1d2f8ef-0175-4070-881a-825ccd1219b8"). InnerVolumeSpecName "kube-api-access-fctfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.833411 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1d2f8ef-0175-4070-881a-825ccd1219b8" (UID: "e1d2f8ef-0175-4070-881a-825ccd1219b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.913203 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.913587 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.913601 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fctfd\" (UniqueName: \"kubernetes.io/projected/e1d2f8ef-0175-4070-881a-825ccd1219b8-kube-api-access-fctfd\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:04 crc kubenswrapper[4902]: I0121 16:27:04.035382 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-11d1-account-create-update-c7r42"] Jan 21 16:27:04 crc kubenswrapper[4902]: I0121 16:27:04.044363 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-11d1-account-create-update-c7r42"] Jan 21 16:27:04 crc kubenswrapper[4902]: I0121 16:27:04.307586 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c" path="/var/lib/kubelet/pods/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c/volumes" Jan 21 16:27:04 crc kubenswrapper[4902]: I0121 16:27:04.308260 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff9e17b7-5e08-4042-9b1b-ccad64651eef" path="/var/lib/kubelet/pods/ff9e17b7-5e08-4042-9b1b-ccad64651eef/volumes" Jan 21 16:27:04 crc kubenswrapper[4902]: I0121 16:27:04.683411 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:27:04 crc kubenswrapper[4902]: I0121 16:27:04.721815 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6smhb"] Jan 21 16:27:04 crc kubenswrapper[4902]: I0121 16:27:04.736993 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6smhb"] Jan 21 16:27:06 crc kubenswrapper[4902]: I0121 16:27:06.308203 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" path="/var/lib/kubelet/pods/e1d2f8ef-0175-4070-881a-825ccd1219b8/volumes" Jan 21 16:27:07 crc kubenswrapper[4902]: I0121 16:27:07.294685 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:27:07 crc kubenswrapper[4902]: E0121 16:27:07.295258 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.714634 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rdqrp"] Jan 21 16:27:09 crc kubenswrapper[4902]: E0121 16:27:09.715738 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerName="registry-server" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.715770 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerName="registry-server" Jan 21 16:27:09 crc kubenswrapper[4902]: E0121 16:27:09.715800 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerName="extract-utilities" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.715808 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerName="extract-utilities" Jan 21 16:27:09 crc kubenswrapper[4902]: E0121 16:27:09.715829 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerName="extract-content" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.715836 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerName="extract-content" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.716144 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerName="registry-server" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.718266 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.731825 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rdqrp"] Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.737593 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-catalog-content\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.737711 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-utilities\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.737946 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dzsq\" (UniqueName: \"kubernetes.io/projected/789280b2-0f33-468a-b0c8-9fe9a3843e3c-kube-api-access-7dzsq\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.840852 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-utilities\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.841364 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dzsq\" (UniqueName: \"kubernetes.io/projected/789280b2-0f33-468a-b0c8-9fe9a3843e3c-kube-api-access-7dzsq\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.841393 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-utilities\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.841521 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-catalog-content\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.841798 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-catalog-content\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.862245 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7dzsq\" (UniqueName: \"kubernetes.io/projected/789280b2-0f33-468a-b0c8-9fe9a3843e3c-kube-api-access-7dzsq\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:10 crc kubenswrapper[4902]: I0121 16:27:10.043538 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:10 crc kubenswrapper[4902]: I0121 16:27:10.505338 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rdqrp"] Jan 21 16:27:10 crc kubenswrapper[4902]: I0121 16:27:10.756594 4902 generic.go:334] "Generic (PLEG): container finished" podID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerID="ecc4d1b7ad6d3c3e3e91d4bd9e4657053e105bd206863129b0c9caecb3844760" exitCode=0 Jan 21 16:27:10 crc kubenswrapper[4902]: I0121 16:27:10.756692 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdqrp" event={"ID":"789280b2-0f33-468a-b0c8-9fe9a3843e3c","Type":"ContainerDied","Data":"ecc4d1b7ad6d3c3e3e91d4bd9e4657053e105bd206863129b0c9caecb3844760"} Jan 21 16:27:10 crc kubenswrapper[4902]: I0121 16:27:10.757152 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdqrp" event={"ID":"789280b2-0f33-468a-b0c8-9fe9a3843e3c","Type":"ContainerStarted","Data":"d7bcc8cece54e32e70746c7d4793c0eec0b3c6c2bff3c7a2c469217cd9ee806c"} Jan 21 16:27:12 crc kubenswrapper[4902]: I0121 16:27:12.776736 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdqrp" event={"ID":"789280b2-0f33-468a-b0c8-9fe9a3843e3c","Type":"ContainerStarted","Data":"0b5fb853e79c68c6241f67d5b7bbcb7d13dc083797c50a00f83b6ef27ef4b827"} Jan 21 16:27:13 crc kubenswrapper[4902]: I0121 16:27:13.789212 4902 generic.go:334] "Generic (PLEG): container finished" podID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerID="0b5fb853e79c68c6241f67d5b7bbcb7d13dc083797c50a00f83b6ef27ef4b827" exitCode=0 Jan 21 16:27:13 crc kubenswrapper[4902]: I0121 16:27:13.789255 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdqrp" event={"ID":"789280b2-0f33-468a-b0c8-9fe9a3843e3c","Type":"ContainerDied","Data":"0b5fb853e79c68c6241f67d5b7bbcb7d13dc083797c50a00f83b6ef27ef4b827"} Jan 21 16:27:15 crc kubenswrapper[4902]: I0121 16:27:15.816389 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdqrp" event={"ID":"789280b2-0f33-468a-b0c8-9fe9a3843e3c","Type":"ContainerStarted","Data":"35f73d651eeaa6573d9033ccbf674b8ce47b749239de3eb8f9420a462171ab10"} Jan 21 16:27:18 crc kubenswrapper[4902]: I0121 16:27:18.044585 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rdqrp" podStartSLOduration=4.305061785 podStartE2EDuration="9.044560816s" podCreationTimestamp="2026-01-21 16:27:09 +0000 UTC" firstStartedPulling="2026-01-21 16:27:10.759142892 +0000 UTC m=+6792.835975921" lastFinishedPulling="2026-01-21 16:27:15.498641923 +0000 UTC m=+6797.575474952" observedRunningTime="2026-01-21 16:27:15.841665537 +0000 UTC m=+6797.918498576" watchObservedRunningTime="2026-01-21 16:27:18.044560816 +0000 UTC m=+6800.121393855" Jan 21 16:27:18 crc kubenswrapper[4902]: I0121 16:27:18.046277 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-db-sync-5zjtz"] Jan 21 16:27:18 crc kubenswrapper[4902]: I0121 16:27:18.057211 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-5zjtz"] Jan 21 16:27:18 crc kubenswrapper[4902]: I0121 16:27:18.306784 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1a02641-de79-49cd-91a4-d689c669a38c" path="/var/lib/kubelet/pods/b1a02641-de79-49cd-91a4-d689c669a38c/volumes" Jan 21 16:27:20 crc kubenswrapper[4902]: I0121 16:27:20.044673 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:20 crc kubenswrapper[4902]: I0121 16:27:20.045142 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:20 crc kubenswrapper[4902]: I0121 16:27:20.087789 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:20 crc kubenswrapper[4902]: I0121 16:27:20.928572 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:21 crc kubenswrapper[4902]: I0121 16:27:21.039611 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rdqrp"] Jan 21 16:27:22 crc kubenswrapper[4902]: I0121 16:27:22.295089 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:27:22 crc kubenswrapper[4902]: E0121 16:27:22.295494 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:27:22 crc kubenswrapper[4902]: I0121 16:27:22.880824 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rdqrp" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerName="registry-server" containerID="cri-o://35f73d651eeaa6573d9033ccbf674b8ce47b749239de3eb8f9420a462171ab10" gracePeriod=2 Jan 21 16:27:23 crc kubenswrapper[4902]: I0121 16:27:23.891690 4902 generic.go:334] "Generic (PLEG): container finished" podID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerID="35f73d651eeaa6573d9033ccbf674b8ce47b749239de3eb8f9420a462171ab10" exitCode=0 Jan 21 16:27:23 crc kubenswrapper[4902]: I0121 16:27:23.891794 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdqrp" event={"ID":"789280b2-0f33-468a-b0c8-9fe9a3843e3c","Type":"ContainerDied","Data":"35f73d651eeaa6573d9033ccbf674b8ce47b749239de3eb8f9420a462171ab10"} Jan 21 16:27:23 crc kubenswrapper[4902]: I0121 16:27:23.892104 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdqrp" event={"ID":"789280b2-0f33-468a-b0c8-9fe9a3843e3c","Type":"ContainerDied","Data":"d7bcc8cece54e32e70746c7d4793c0eec0b3c6c2bff3c7a2c469217cd9ee806c"} Jan 21 16:27:23 crc kubenswrapper[4902]: I0121 16:27:23.892121 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7bcc8cece54e32e70746c7d4793c0eec0b3c6c2bff3c7a2c469217cd9ee806c" Jan 21 16:27:23 crc 
kubenswrapper[4902]: I0121 16:27:23.930678 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.057313 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dzsq\" (UniqueName: \"kubernetes.io/projected/789280b2-0f33-468a-b0c8-9fe9a3843e3c-kube-api-access-7dzsq\") pod \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.057760 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-utilities\") pod \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.057927 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-catalog-content\") pod \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.058968 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-utilities" (OuterVolumeSpecName: "utilities") pod "789280b2-0f33-468a-b0c8-9fe9a3843e3c" (UID: "789280b2-0f33-468a-b0c8-9fe9a3843e3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.066557 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/789280b2-0f33-468a-b0c8-9fe9a3843e3c-kube-api-access-7dzsq" (OuterVolumeSpecName: "kube-api-access-7dzsq") pod "789280b2-0f33-468a-b0c8-9fe9a3843e3c" (UID: "789280b2-0f33-468a-b0c8-9fe9a3843e3c"). InnerVolumeSpecName "kube-api-access-7dzsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.110967 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "789280b2-0f33-468a-b0c8-9fe9a3843e3c" (UID: "789280b2-0f33-468a-b0c8-9fe9a3843e3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.160400 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.160440 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dzsq\" (UniqueName: \"kubernetes.io/projected/789280b2-0f33-468a-b0c8-9fe9a3843e3c-kube-api-access-7dzsq\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.160451 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.899989 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.923215 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rdqrp"] Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.931429 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rdqrp"] Jan 21 16:27:26 crc kubenswrapper[4902]: I0121 16:27:26.312164 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" path="/var/lib/kubelet/pods/789280b2-0f33-468a-b0c8-9fe9a3843e3c/volumes" Jan 21 16:27:34 crc kubenswrapper[4902]: I0121 16:27:34.992187 4902 scope.go:117] "RemoveContainer" containerID="745a40ea71fa0659b994aca7c2aff73301bd6c551946f45e224b9ab71b71e18f" Jan 21 16:27:35 crc kubenswrapper[4902]: I0121 16:27:35.028291 4902 scope.go:117] "RemoveContainer" containerID="8e3ea4085f3e9419958669812fbb80d867719697fa5d6f29fd25013487806482" Jan 21 16:27:35 crc kubenswrapper[4902]: I0121 16:27:35.111400 4902 scope.go:117] "RemoveContainer" containerID="12cbd897a8c963b1753af6838fe6f74f721c8f8e6f46ac0835b5c50a96042e89" Jan 21 16:27:36 crc kubenswrapper[4902]: I0121 16:27:36.295461 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:27:36 crc kubenswrapper[4902]: E0121 16:27:36.297225 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.679368 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9lrkm"] Jan 21 16:27:44 crc kubenswrapper[4902]: E0121 16:27:44.680537 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerName="extract-content" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.680552 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerName="extract-content" Jan 21 16:27:44 crc kubenswrapper[4902]: E0121 16:27:44.680593 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerName="registry-server" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.680600 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerName="registry-server" Jan 21 16:27:44 crc kubenswrapper[4902]: E0121 16:27:44.680610 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerName="extract-utilities" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.680617 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerName="extract-utilities" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.680818 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerName="registry-server" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.682270 4902 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.694777 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wphks\" (UniqueName: \"kubernetes.io/projected/e9b6c94a-d638-4e6d-8976-17a191b91565-kube-api-access-wphks\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.694842 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-utilities\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.694974 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-catalog-content\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.701169 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9lrkm"] Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.795930 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wphks\" (UniqueName: \"kubernetes.io/projected/e9b6c94a-d638-4e6d-8976-17a191b91565-kube-api-access-wphks\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.796317 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-utilities\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.796445 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-catalog-content\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.796896 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-utilities\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.796976 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-catalog-content\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.821130 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wphks\" (UniqueName: \"kubernetes.io/projected/e9b6c94a-d638-4e6d-8976-17a191b91565-kube-api-access-wphks\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:45 crc kubenswrapper[4902]: I0121 16:27:45.012492 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:45 crc kubenswrapper[4902]: I0121 16:27:45.494844 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9lrkm"] Jan 21 16:27:46 crc kubenswrapper[4902]: I0121 16:27:46.159716 4902 generic.go:334] "Generic (PLEG): container finished" podID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerID="289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4" exitCode=0 Jan 21 16:27:46 crc kubenswrapper[4902]: I0121 16:27:46.159927 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lrkm" event={"ID":"e9b6c94a-d638-4e6d-8976-17a191b91565","Type":"ContainerDied","Data":"289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4"} Jan 21 16:27:46 crc kubenswrapper[4902]: I0121 16:27:46.160025 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lrkm" event={"ID":"e9b6c94a-d638-4e6d-8976-17a191b91565","Type":"ContainerStarted","Data":"f2424df31c1a55e9e46726ec4a04a3834f75f121678bf943db69fe8fa9105763"} Jan 21 16:27:48 crc kubenswrapper[4902]: I0121 16:27:48.179777 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lrkm" event={"ID":"e9b6c94a-d638-4e6d-8976-17a191b91565","Type":"ContainerStarted","Data":"35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace"} Jan 21 16:27:49 crc kubenswrapper[4902]: I0121 16:27:49.295583 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:27:49 crc kubenswrapper[4902]: E0121 16:27:49.295841 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:27:50 crc kubenswrapper[4902]: I0121 16:27:50.587264 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-xpzj8" podUID="6fc6639b-9150-4158-836f-1ffc1c4f5339" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:27:54 crc kubenswrapper[4902]: I0121 16:27:54.232914 4902 generic.go:334] "Generic (PLEG): container finished" podID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerID="35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace" exitCode=0 Jan 21 16:27:54 crc kubenswrapper[4902]: I0121 16:27:54.232993 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lrkm" event={"ID":"e9b6c94a-d638-4e6d-8976-17a191b91565","Type":"ContainerDied","Data":"35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace"} Jan 21 16:27:55 crc kubenswrapper[4902]: I0121 16:27:55.244147 4902 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-9lrkm" event={"ID":"e9b6c94a-d638-4e6d-8976-17a191b91565","Type":"ContainerStarted","Data":"928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9"} Jan 21 16:27:55 crc kubenswrapper[4902]: I0121 16:27:55.265217 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9lrkm" podStartSLOduration=2.620986804 podStartE2EDuration="11.265195061s" podCreationTimestamp="2026-01-21 16:27:44 +0000 UTC" firstStartedPulling="2026-01-21 16:27:46.161975235 +0000 UTC m=+6828.238808264" lastFinishedPulling="2026-01-21 16:27:54.806183492 +0000 UTC m=+6836.883016521" observedRunningTime="2026-01-21 16:27:55.264697077 +0000 UTC m=+6837.341530116" watchObservedRunningTime="2026-01-21 16:27:55.265195061 +0000 UTC m=+6837.342028100" Jan 21 16:28:03 crc kubenswrapper[4902]: I0121 16:28:03.295136 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:28:03 crc kubenswrapper[4902]: E0121 16:28:03.298067 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:28:05 crc kubenswrapper[4902]: I0121 16:28:05.012620 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:28:05 crc kubenswrapper[4902]: I0121 16:28:05.012674 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:28:05 crc kubenswrapper[4902]: I0121 16:28:05.066822 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:28:05 crc kubenswrapper[4902]: I0121 16:28:05.392443 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:28:05 crc kubenswrapper[4902]: I0121 16:28:05.439700 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9lrkm"] Jan 21 16:28:07 crc kubenswrapper[4902]: I0121 16:28:07.366636 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9lrkm" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerName="registry-server" containerID="cri-o://928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9" gracePeriod=2 Jan 21 16:28:07 crc kubenswrapper[4902]: I0121 16:28:07.824870 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:28:07 crc kubenswrapper[4902]: I0121 16:28:07.932512 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-utilities\") pod \"e9b6c94a-d638-4e6d-8976-17a191b91565\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " Jan 21 16:28:07 crc kubenswrapper[4902]: I0121 16:28:07.932562 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wphks\" (UniqueName: \"kubernetes.io/projected/e9b6c94a-d638-4e6d-8976-17a191b91565-kube-api-access-wphks\") pod \"e9b6c94a-d638-4e6d-8976-17a191b91565\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " Jan 21 16:28:07 crc kubenswrapper[4902]: I0121 16:28:07.932859 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-catalog-content\") pod \"e9b6c94a-d638-4e6d-8976-17a191b91565\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " Jan 21 16:28:07 crc kubenswrapper[4902]: I0121 16:28:07.933565 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-utilities" (OuterVolumeSpecName: "utilities") pod "e9b6c94a-d638-4e6d-8976-17a191b91565" (UID: "e9b6c94a-d638-4e6d-8976-17a191b91565"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:28:07 crc kubenswrapper[4902]: I0121 16:28:07.938747 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b6c94a-d638-4e6d-8976-17a191b91565-kube-api-access-wphks" (OuterVolumeSpecName: "kube-api-access-wphks") pod "e9b6c94a-d638-4e6d-8976-17a191b91565" (UID: "e9b6c94a-d638-4e6d-8976-17a191b91565"). InnerVolumeSpecName "kube-api-access-wphks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.035310 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.035351 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wphks\" (UniqueName: \"kubernetes.io/projected/e9b6c94a-d638-4e6d-8976-17a191b91565-kube-api-access-wphks\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.072664 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9b6c94a-d638-4e6d-8976-17a191b91565" (UID: "e9b6c94a-d638-4e6d-8976-17a191b91565"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.137672 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.377581 4902 generic.go:334] "Generic (PLEG): container finished" podID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerID="928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9" exitCode=0 Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.377626 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lrkm" event={"ID":"e9b6c94a-d638-4e6d-8976-17a191b91565","Type":"ContainerDied","Data":"928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9"} Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.377698 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lrkm" event={"ID":"e9b6c94a-d638-4e6d-8976-17a191b91565","Type":"ContainerDied","Data":"f2424df31c1a55e9e46726ec4a04a3834f75f121678bf943db69fe8fa9105763"} Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.377728 4902 scope.go:117] "RemoveContainer" containerID="928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.378871 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.419677 4902 scope.go:117] "RemoveContainer" containerID="35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.425151 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9lrkm"] Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.435759 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9lrkm"] Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.466758 4902 scope.go:117] "RemoveContainer" containerID="289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.505240 4902 scope.go:117] "RemoveContainer" containerID="928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9" Jan 21 16:28:08 crc kubenswrapper[4902]: E0121 16:28:08.506317 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9\": container with ID starting with 928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9 not found: ID does not exist" containerID="928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.506370 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9"} err="failed to get container status \"928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9\": rpc error: code = NotFound desc = could not find container \"928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9\": container with ID starting with 928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9 not found: ID does not exist" Jan 21 16:28:08 crc 
kubenswrapper[4902]: I0121 16:28:08.506406 4902 scope.go:117] "RemoveContainer" containerID="35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace" Jan 21 16:28:08 crc kubenswrapper[4902]: E0121 16:28:08.507487 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace\": container with ID starting with 35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace not found: ID does not exist" containerID="35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.507563 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace"} err="failed to get container status \"35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace\": rpc error: code = NotFound desc = could not find container \"35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace\": container with ID starting with 35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace not found: ID does not exist" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.507615 4902 scope.go:117] "RemoveContainer" containerID="289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4" Jan 21 16:28:08 crc kubenswrapper[4902]: E0121 16:28:08.508302 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4\": container with ID starting with 289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4 not found: ID does not exist" containerID="289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.508352 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4"} err="failed to get container status \"289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4\": rpc error: code = NotFound desc = could not find container \"289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4\": container with ID starting with 289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4 not found: ID does not exist" Jan 21 16:28:10 crc kubenswrapper[4902]: I0121 16:28:10.307234 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" path="/var/lib/kubelet/pods/e9b6c94a-d638-4e6d-8976-17a191b91565/volumes" Jan 21 16:28:15 crc kubenswrapper[4902]: I0121 16:28:15.295375 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:28:15 crc kubenswrapper[4902]: E0121 16:28:15.296231 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:28:29 crc kubenswrapper[4902]: I0121 16:28:29.295818 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" 
Jan 21 16:28:29 crc kubenswrapper[4902]: E0121 16:28:29.296632 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:28:40 crc kubenswrapper[4902]: I0121 16:28:40.295829 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:28:40 crc kubenswrapper[4902]: E0121 16:28:40.296757 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:28:43 crc kubenswrapper[4902]: I0121 16:28:43.851185 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nddsl"] Jan 21 16:28:43 crc kubenswrapper[4902]: E0121 16:28:43.851996 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerName="registry-server" Jan 21 16:28:43 crc kubenswrapper[4902]: I0121 16:28:43.852012 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerName="registry-server" Jan 21 16:28:43 crc kubenswrapper[4902]: E0121 16:28:43.852056 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerName="extract-utilities" Jan 21 16:28:43 crc kubenswrapper[4902]: I0121 16:28:43.852068 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerName="extract-utilities" Jan 21 16:28:43 crc kubenswrapper[4902]: E0121 16:28:43.852113 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerName="extract-content" Jan 21 16:28:43 crc kubenswrapper[4902]: I0121 16:28:43.852123 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerName="extract-content" Jan 21 16:28:43 crc kubenswrapper[4902]: I0121 16:28:43.852418 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerName="registry-server" Jan 21 16:28:43 crc kubenswrapper[4902]: I0121 16:28:43.854901 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:43 crc kubenswrapper[4902]: I0121 16:28:43.865016 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nddsl"] Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.054071 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pfpl\" (UniqueName: \"kubernetes.io/projected/859a97e5-04f4-47a3-af07-4546c61e21fc-kube-api-access-7pfpl\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.054271 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-catalog-content\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.054307 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-utilities\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.156776 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pfpl\" (UniqueName: \"kubernetes.io/projected/859a97e5-04f4-47a3-af07-4546c61e21fc-kube-api-access-7pfpl\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.157004 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-catalog-content\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.157068 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-utilities\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.157457 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-catalog-content\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.157498 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-utilities\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.177568 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7pfpl\" (UniqueName: \"kubernetes.io/projected/859a97e5-04f4-47a3-af07-4546c61e21fc-kube-api-access-7pfpl\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.183575 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.756194 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nddsl"] Jan 21 16:28:44 crc kubenswrapper[4902]: W0121 16:28:44.768643 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod859a97e5_04f4_47a3_af07_4546c61e21fc.slice/crio-4f9eb2131357be0ae15daf60bdcecde4c6eef7573e8a4b52b5979cf63b502812 WatchSource:0}: Error finding container 4f9eb2131357be0ae15daf60bdcecde4c6eef7573e8a4b52b5979cf63b502812: Status 404 returned error can't find the container with id 4f9eb2131357be0ae15daf60bdcecde4c6eef7573e8a4b52b5979cf63b502812 Jan 21 16:28:45 crc kubenswrapper[4902]: I0121 16:28:45.778313 4902 generic.go:334] "Generic (PLEG): container finished" podID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerID="0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5" exitCode=0 Jan 21 16:28:45 crc kubenswrapper[4902]: I0121 16:28:45.778432 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nddsl" event={"ID":"859a97e5-04f4-47a3-af07-4546c61e21fc","Type":"ContainerDied","Data":"0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5"} Jan 21 16:28:45 crc kubenswrapper[4902]: I0121 16:28:45.778686 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nddsl" event={"ID":"859a97e5-04f4-47a3-af07-4546c61e21fc","Type":"ContainerStarted","Data":"4f9eb2131357be0ae15daf60bdcecde4c6eef7573e8a4b52b5979cf63b502812"} Jan 21 16:28:47 crc kubenswrapper[4902]: I0121 16:28:47.808850 4902 generic.go:334] "Generic (PLEG): container finished" podID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerID="8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf" exitCode=0 Jan 21 16:28:47 crc kubenswrapper[4902]: I0121 16:28:47.809281 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nddsl" event={"ID":"859a97e5-04f4-47a3-af07-4546c61e21fc","Type":"ContainerDied","Data":"8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf"} Jan 21 16:28:48 crc kubenswrapper[4902]: I0121 16:28:48.824523 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nddsl" event={"ID":"859a97e5-04f4-47a3-af07-4546c61e21fc","Type":"ContainerStarted","Data":"929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5"} Jan 21 16:28:48 crc kubenswrapper[4902]: I0121 16:28:48.857213 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nddsl" podStartSLOduration=3.424128021 podStartE2EDuration="5.857186527s" podCreationTimestamp="2026-01-21 16:28:43 +0000 UTC" firstStartedPulling="2026-01-21 16:28:45.780903007 +0000 UTC m=+6887.857736036" lastFinishedPulling="2026-01-21 16:28:48.213961513 +0000 UTC m=+6890.290794542" observedRunningTime="2026-01-21 16:28:48.841972409 +0000 UTC 
m=+6890.918805438" watchObservedRunningTime="2026-01-21 16:28:48.857186527 +0000 UTC m=+6890.934019556" Jan 21 16:28:51 crc kubenswrapper[4902]: I0121 16:28:51.295520 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:28:51 crc kubenswrapper[4902]: E0121 16:28:51.296516 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:28:54 crc kubenswrapper[4902]: I0121 16:28:54.185343 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:54 crc kubenswrapper[4902]: I0121 16:28:54.185609 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:54 crc kubenswrapper[4902]: I0121 16:28:54.234465 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:54 crc kubenswrapper[4902]: I0121 16:28:54.937611 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:55 crc kubenswrapper[4902]: I0121 16:28:55.009702 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nddsl"] Jan 21 16:28:56 crc kubenswrapper[4902]: I0121 16:28:56.899252 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nddsl" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerName="registry-server" containerID="cri-o://929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5" gracePeriod=2 Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.731071 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.870161 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pfpl\" (UniqueName: \"kubernetes.io/projected/859a97e5-04f4-47a3-af07-4546c61e21fc-kube-api-access-7pfpl\") pod \"859a97e5-04f4-47a3-af07-4546c61e21fc\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.870347 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-catalog-content\") pod \"859a97e5-04f4-47a3-af07-4546c61e21fc\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.870542 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-utilities\") pod \"859a97e5-04f4-47a3-af07-4546c61e21fc\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.871339 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-utilities" (OuterVolumeSpecName: "utilities") pod "859a97e5-04f4-47a3-af07-4546c61e21fc" (UID: "859a97e5-04f4-47a3-af07-4546c61e21fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.877436 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859a97e5-04f4-47a3-af07-4546c61e21fc-kube-api-access-7pfpl" (OuterVolumeSpecName: "kube-api-access-7pfpl") pod "859a97e5-04f4-47a3-af07-4546c61e21fc" (UID: "859a97e5-04f4-47a3-af07-4546c61e21fc"). InnerVolumeSpecName "kube-api-access-7pfpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.916006 4902 generic.go:334] "Generic (PLEG): container finished" podID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerID="929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5" exitCode=0 Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.916097 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nddsl" event={"ID":"859a97e5-04f4-47a3-af07-4546c61e21fc","Type":"ContainerDied","Data":"929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5"} Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.916140 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.916151 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nddsl" event={"ID":"859a97e5-04f4-47a3-af07-4546c61e21fc","Type":"ContainerDied","Data":"4f9eb2131357be0ae15daf60bdcecde4c6eef7573e8a4b52b5979cf63b502812"} Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.916182 4902 scope.go:117] "RemoveContainer" containerID="929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.937710 4902 scope.go:117] "RemoveContainer" containerID="8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.941059 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "859a97e5-04f4-47a3-af07-4546c61e21fc" (UID: "859a97e5-04f4-47a3-af07-4546c61e21fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.963394 4902 scope.go:117] "RemoveContainer" containerID="0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.973934 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.973976 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pfpl\" (UniqueName: \"kubernetes.io/projected/859a97e5-04f4-47a3-af07-4546c61e21fc-kube-api-access-7pfpl\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.973992 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:58 crc kubenswrapper[4902]: I0121 16:28:58.020165 4902 scope.go:117] "RemoveContainer" containerID="929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5" Jan 21 16:28:58 crc kubenswrapper[4902]: E0121 16:28:58.021743 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5\": container with ID starting with 929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5 not found: ID does not exist" containerID="929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5" Jan 21 16:28:58 crc kubenswrapper[4902]: I0121 16:28:58.021801 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5"} err="failed to get container status \"929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5\": rpc error: code = NotFound desc = could not find container \"929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5\": container with ID starting with 929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5 not found: ID does not exist" Jan 21 16:28:58 crc kubenswrapper[4902]: I0121 16:28:58.021832 4902 scope.go:117] "RemoveContainer" 
containerID="8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf" Jan 21 16:28:58 crc kubenswrapper[4902]: E0121 16:28:58.022341 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf\": container with ID starting with 8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf not found: ID does not exist" containerID="8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf" Jan 21 16:28:58 crc kubenswrapper[4902]: I0121 16:28:58.022391 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf"} err="failed to get container status \"8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf\": rpc error: code = NotFound desc = could not find container \"8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf\": container with ID starting with 8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf not found: ID does not exist" Jan 21 16:28:58 crc kubenswrapper[4902]: I0121 16:28:58.022430 4902 scope.go:117] "RemoveContainer" containerID="0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5" Jan 21 16:28:58 crc kubenswrapper[4902]: E0121 16:28:58.022828 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5\": container with ID starting with 0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5 not found: ID does not exist" containerID="0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5" Jan 21 16:28:58 crc kubenswrapper[4902]: I0121 16:28:58.022863 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5"} err="failed to get container status \"0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5\": rpc error: code = NotFound desc = could not find container \"0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5\": container with ID starting with 0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5 not found: ID does not exist" Jan 21 16:28:58 crc kubenswrapper[4902]: I0121 16:28:58.338563 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nddsl"] Jan 21 16:28:58 crc kubenswrapper[4902]: I0121 16:28:58.343980 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nddsl"] Jan 21 16:29:00 crc kubenswrapper[4902]: I0121 16:29:00.319898 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" path="/var/lib/kubelet/pods/859a97e5-04f4-47a3-af07-4546c61e21fc/volumes" Jan 21 16:29:02 crc kubenswrapper[4902]: I0121 16:29:02.297398 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:29:02 crc kubenswrapper[4902]: E0121 16:29:02.298352 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:29:13 crc kubenswrapper[4902]: I0121 16:29:13.294826 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:29:13 crc kubenswrapper[4902]: E0121 16:29:13.296851 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:29:26 crc kubenswrapper[4902]: I0121 16:29:26.295110 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:29:26 crc kubenswrapper[4902]: E0121 16:29:26.295821 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:29:40 crc kubenswrapper[4902]: I0121 16:29:40.296469 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:29:40 crc kubenswrapper[4902]: E0121 16:29:40.297857 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:29:45 crc kubenswrapper[4902]: I0121 16:29:45.058953 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-fe19-account-create-update-m4ndc"] Jan 21 16:29:45 crc kubenswrapper[4902]: I0121 16:29:45.071760 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-fe19-account-create-update-m4ndc"] Jan 21 16:29:46 crc kubenswrapper[4902]: I0121 16:29:46.034685 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-v4xqk"] Jan 21 16:29:46 crc kubenswrapper[4902]: I0121 16:29:46.051117 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-v4xqk"] Jan 21 16:29:46 crc kubenswrapper[4902]: I0121 16:29:46.312389 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f9de683-01b0-4513-8e18-d56361ae4bc6" path="/var/lib/kubelet/pods/4f9de683-01b0-4513-8e18-d56361ae4bc6/volumes" Jan 21 16:29:46 crc kubenswrapper[4902]: I0121 16:29:46.314346 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="947c6da7-eea1-412b-8f8d-f1cdfadcf4ea" path="/var/lib/kubelet/pods/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea/volumes" Jan 21 16:29:54 crc kubenswrapper[4902]: I0121 16:29:54.294910 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:29:54 crc kubenswrapper[4902]: E0121 16:29:54.295682 4902 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.182058 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf"] Jan 21 16:30:00 crc kubenswrapper[4902]: E0121 16:30:00.183093 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerName="registry-server" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.183109 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerName="registry-server" Jan 21 16:30:00 crc kubenswrapper[4902]: E0121 16:30:00.183128 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerName="extract-content" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.183138 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerName="extract-content" Jan 21 16:30:00 crc kubenswrapper[4902]: E0121 16:30:00.183163 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerName="extract-utilities" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.183171 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerName="extract-utilities" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.183526 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerName="registry-server" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.186247 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.191553 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.192208 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.192627 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf"] Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.219517 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbfsx\" (UniqueName: \"kubernetes.io/projected/8598a357-73ed-4850-bbd3-ce46d3d9a623-kube-api-access-bbfsx\") pod \"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.219815 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8598a357-73ed-4850-bbd3-ce46d3d9a623-secret-volume\") pod \"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.220118 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8598a357-73ed-4850-bbd3-ce46d3d9a623-config-volume\") pod \"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.321492 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8598a357-73ed-4850-bbd3-ce46d3d9a623-config-volume\") pod \"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.321571 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbfsx\" (UniqueName: \"kubernetes.io/projected/8598a357-73ed-4850-bbd3-ce46d3d9a623-kube-api-access-bbfsx\") pod \"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.321616 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8598a357-73ed-4850-bbd3-ce46d3d9a623-secret-volume\") pod \"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.323538 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8598a357-73ed-4850-bbd3-ce46d3d9a623-config-volume\") pod 
\"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.328193 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8598a357-73ed-4850-bbd3-ce46d3d9a623-secret-volume\") pod \"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.338442 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbfsx\" (UniqueName: \"kubernetes.io/projected/8598a357-73ed-4850-bbd3-ce46d3d9a623-kube-api-access-bbfsx\") pod \"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.506869 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:01 crc kubenswrapper[4902]: I0121 16:30:01.014601 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf"] Jan 21 16:30:01 crc kubenswrapper[4902]: W0121 16:30:01.034865 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8598a357_73ed_4850_bbd3_ce46d3d9a623.slice/crio-4b18519e5ad76fb7f93e6cc068f4bac59a84ecb67cd9c9339373bf79b7174b78 WatchSource:0}: Error finding container 4b18519e5ad76fb7f93e6cc068f4bac59a84ecb67cd9c9339373bf79b7174b78: Status 404 returned error can't find the container with id 4b18519e5ad76fb7f93e6cc068f4bac59a84ecb67cd9c9339373bf79b7174b78 Jan 21 16:30:01 crc kubenswrapper[4902]: I0121 16:30:01.047551 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-bvsxp"] Jan 21 16:30:01 crc kubenswrapper[4902]: I0121 16:30:01.063096 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-bvsxp"] Jan 21 16:30:01 crc kubenswrapper[4902]: I0121 16:30:01.540858 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" event={"ID":"8598a357-73ed-4850-bbd3-ce46d3d9a623","Type":"ContainerStarted","Data":"1150c7694232d9425d7e1595d33c3ffaecb94a439744ef680974e317c8ea6ae2"} Jan 21 16:30:01 crc kubenswrapper[4902]: I0121 16:30:01.540914 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" event={"ID":"8598a357-73ed-4850-bbd3-ce46d3d9a623","Type":"ContainerStarted","Data":"4b18519e5ad76fb7f93e6cc068f4bac59a84ecb67cd9c9339373bf79b7174b78"} Jan 21 16:30:01 crc kubenswrapper[4902]: I0121 16:30:01.563109 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" podStartSLOduration=1.563092874 podStartE2EDuration="1.563092874s" podCreationTimestamp="2026-01-21 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:30:01.554446531 +0000 UTC m=+6963.631279560" watchObservedRunningTime="2026-01-21 16:30:01.563092874 +0000 UTC m=+6963.639925903" Jan 21 
16:30:02 crc kubenswrapper[4902]: I0121 16:30:02.308502 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ad5c1ce-9471-430a-b273-873699a86d57" path="/var/lib/kubelet/pods/7ad5c1ce-9471-430a-b273-873699a86d57/volumes" Jan 21 16:30:02 crc kubenswrapper[4902]: I0121 16:30:02.551586 4902 generic.go:334] "Generic (PLEG): container finished" podID="8598a357-73ed-4850-bbd3-ce46d3d9a623" containerID="1150c7694232d9425d7e1595d33c3ffaecb94a439744ef680974e317c8ea6ae2" exitCode=0 Jan 21 16:30:02 crc kubenswrapper[4902]: I0121 16:30:02.551648 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" event={"ID":"8598a357-73ed-4850-bbd3-ce46d3d9a623","Type":"ContainerDied","Data":"1150c7694232d9425d7e1595d33c3ffaecb94a439744ef680974e317c8ea6ae2"} Jan 21 16:30:03 crc kubenswrapper[4902]: I0121 16:30:03.956453 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.012330 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbfsx\" (UniqueName: \"kubernetes.io/projected/8598a357-73ed-4850-bbd3-ce46d3d9a623-kube-api-access-bbfsx\") pod \"8598a357-73ed-4850-bbd3-ce46d3d9a623\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.012603 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8598a357-73ed-4850-bbd3-ce46d3d9a623-config-volume\") pod \"8598a357-73ed-4850-bbd3-ce46d3d9a623\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.012659 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8598a357-73ed-4850-bbd3-ce46d3d9a623-secret-volume\") pod \"8598a357-73ed-4850-bbd3-ce46d3d9a623\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.013222 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8598a357-73ed-4850-bbd3-ce46d3d9a623-config-volume" (OuterVolumeSpecName: "config-volume") pod "8598a357-73ed-4850-bbd3-ce46d3d9a623" (UID: "8598a357-73ed-4850-bbd3-ce46d3d9a623"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.013501 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8598a357-73ed-4850-bbd3-ce46d3d9a623-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.018562 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8598a357-73ed-4850-bbd3-ce46d3d9a623-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8598a357-73ed-4850-bbd3-ce46d3d9a623" (UID: "8598a357-73ed-4850-bbd3-ce46d3d9a623"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.021299 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8598a357-73ed-4850-bbd3-ce46d3d9a623-kube-api-access-bbfsx" (OuterVolumeSpecName: "kube-api-access-bbfsx") pod "8598a357-73ed-4850-bbd3-ce46d3d9a623" (UID: "8598a357-73ed-4850-bbd3-ce46d3d9a623"). InnerVolumeSpecName "kube-api-access-bbfsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.114717 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbfsx\" (UniqueName: \"kubernetes.io/projected/8598a357-73ed-4850-bbd3-ce46d3d9a623-kube-api-access-bbfsx\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.114758 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8598a357-73ed-4850-bbd3-ce46d3d9a623-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.571804 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" event={"ID":"8598a357-73ed-4850-bbd3-ce46d3d9a623","Type":"ContainerDied","Data":"4b18519e5ad76fb7f93e6cc068f4bac59a84ecb67cd9c9339373bf79b7174b78"} Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.571841 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b18519e5ad76fb7f93e6cc068f4bac59a84ecb67cd9c9339373bf79b7174b78" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.571877 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.623764 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m"] Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.631832 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m"] Jan 21 16:30:06 crc kubenswrapper[4902]: I0121 16:30:06.298174 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:30:06 crc kubenswrapper[4902]: E0121 16:30:06.298818 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:30:06 crc kubenswrapper[4902]: I0121 16:30:06.309223 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6893ec42-9882-4d98-9d44-ab57d7366115" path="/var/lib/kubelet/pods/6893ec42-9882-4d98-9d44-ab57d7366115/volumes" Jan 21 16:30:20 crc kubenswrapper[4902]: I0121 16:30:20.295187 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:30:20 crc kubenswrapper[4902]: E0121 16:30:20.295921 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:30:31 crc kubenswrapper[4902]: I0121 16:30:31.294572 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:30:31 crc kubenswrapper[4902]: E0121 16:30:31.295303 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:30:35 crc kubenswrapper[4902]: I0121 16:30:35.307205 4902 scope.go:117] "RemoveContainer" containerID="8ce585bfe7e263f38d6e4b6cf4cca542c267ca3f4df18725b7e9510d21180fb3" Jan 21 16:30:35 crc kubenswrapper[4902]: I0121 16:30:35.338626 4902 scope.go:117] "RemoveContainer" containerID="d06aac15e4e0103b43e5e004729564b5803ddb7e6af160a1d792ad3827466cc3" Jan 21 16:30:35 crc kubenswrapper[4902]: I0121 16:30:35.384606 4902 scope.go:117] "RemoveContainer" containerID="a17204ae8500af5c3ac489e63a42369874fd6943aaf98b293789e79f2dc7c291" Jan 21 16:30:35 crc kubenswrapper[4902]: I0121 16:30:35.461298 4902 scope.go:117] "RemoveContainer" containerID="04e8685d31a4c1b85ba91615c510f74e4584d6a0993549e22bc5847f14ee429d" Jan 21 16:30:44 crc kubenswrapper[4902]: I0121 16:30:44.295717 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:30:44 crc kubenswrapper[4902]: E0121 16:30:44.296719 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:30:57 crc kubenswrapper[4902]: I0121 16:30:57.295549 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:30:57 crc kubenswrapper[4902]: E0121 16:30:57.296394 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:31:08 crc kubenswrapper[4902]: I0121 16:31:08.304570 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:31:08 crc kubenswrapper[4902]: E0121 16:31:08.305489 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:31:23 crc kubenswrapper[4902]: I0121 16:31:23.295373 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:31:24 crc kubenswrapper[4902]: I0121 16:31:24.339634 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"8b5417485127dae8a96d83b6c2fcfd1f6e929b87a550a052576012cdd78d3701"} Jan 21 16:32:56 crc kubenswrapper[4902]: E0121 16:32:56.769253 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18a1d8a3_fcb5_408d_88ab_97d74bad0a8f.slice/crio-conmon-b5588d16688a7ebc8d6fd23427c875175924aa3ba2e94e6335eed27cd3b25dfb.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:32:57 crc kubenswrapper[4902]: I0121 16:32:57.225477 4902 generic.go:334] "Generic (PLEG): container finished" podID="18a1d8a3-fcb5-408d-88ab-97d74bad0a8f" containerID="b5588d16688a7ebc8d6fd23427c875175924aa3ba2e94e6335eed27cd3b25dfb" exitCode=0 Jan 21 16:32:57 crc kubenswrapper[4902]: I0121 16:32:57.225518 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" event={"ID":"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f","Type":"ContainerDied","Data":"b5588d16688a7ebc8d6fd23427c875175924aa3ba2e94e6335eed27cd3b25dfb"} Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.731620 4902 util.go:48] "No ready sandbox for pod can be found. 
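The run of "back-off 5m0s restarting failed container" errors between 16:30:06 and 16:31:08 is the kubelet refusing to restart machine-config-daemon while its crash-loop backoff window is open; the sync loop keeps retrying and logging the same error until the window expires, after which the container starts again at 16:31:23/16:31:24 above. The delay doubles on each crash and is capped at the 5m0s quoted in the message. A toy model of that schedule (the 10s initial delay is kubelet's documented default, assumed here; this is not the kubelet implementation):

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		base     = 10 * time.Second // assumed initial crash-loop delay
		maxDelay = 5 * time.Minute  // the "back-off 5m0s" cap in the log
	)
	delay := base
	for crash := 1; crash <= 7; crash++ {
		fmt.Printf("crash #%d: next restart attempt in %v\n", crash, delay)
		delay *= 2 // exponential growth...
		if delay > maxDelay {
			delay = maxDelay // ...capped at 5m0s
		}
	}
}
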
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.881536 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-tripleo-cleanup-combined-ca-bundle\") pod \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.881610 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-ssh-key-openstack-cell1\") pod \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.881847 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-inventory\") pod \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.881928 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28wbd\" (UniqueName: \"kubernetes.io/projected/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-kube-api-access-28wbd\") pod \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.888287 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-kube-api-access-28wbd" (OuterVolumeSpecName: "kube-api-access-28wbd") pod "18a1d8a3-fcb5-408d-88ab-97d74bad0a8f" (UID: "18a1d8a3-fcb5-408d-88ab-97d74bad0a8f"). InnerVolumeSpecName "kube-api-access-28wbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.892262 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "18a1d8a3-fcb5-408d-88ab-97d74bad0a8f" (UID: "18a1d8a3-fcb5-408d-88ab-97d74bad0a8f"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.916834 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-inventory" (OuterVolumeSpecName: "inventory") pod "18a1d8a3-fcb5-408d-88ab-97d74bad0a8f" (UID: "18a1d8a3-fcb5-408d-88ab-97d74bad0a8f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.922185 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "18a1d8a3-fcb5-408d-88ab-97d74bad0a8f" (UID: "18a1d8a3-fcb5-408d-88ab-97d74bad0a8f"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.984956 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.984991 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28wbd\" (UniqueName: \"kubernetes.io/projected/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-kube-api-access-28wbd\") on node \"crc\" DevicePath \"\"" Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.985003 4902 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.985015 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:32:59 crc kubenswrapper[4902]: I0121 16:32:59.259739 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" event={"ID":"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f","Type":"ContainerDied","Data":"d937f1a62ac88d359e95c410ee456b4680107ca512a37ba97d0e11eaf1bd08e7"} Jan 21 16:32:59 crc kubenswrapper[4902]: I0121 16:32:59.259795 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d937f1a62ac88d359e95c410ee456b4680107ca512a37ba97d0e11eaf1bd08e7" Jan 21 16:32:59 crc kubenswrapper[4902]: I0121 16:32:59.259799 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.113242 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-zwtbg"] Jan 21 16:33:06 crc kubenswrapper[4902]: E0121 16:33:06.114376 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a1d8a3-fcb5-408d-88ab-97d74bad0a8f" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.114396 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a1d8a3-fcb5-408d-88ab-97d74bad0a8f" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 21 16:33:06 crc kubenswrapper[4902]: E0121 16:33:06.114497 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8598a357-73ed-4850-bbd3-ce46d3d9a623" containerName="collect-profiles" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.114508 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8598a357-73ed-4850-bbd3-ce46d3d9a623" containerName="collect-profiles" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.114792 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a1d8a3-fcb5-408d-88ab-97d74bad0a8f" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.114838 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8598a357-73ed-4850-bbd3-ce46d3d9a623" containerName="collect-profiles" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.115923 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.118789 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.119020 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.119277 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.120099 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.130588 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-zwtbg"] Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.156806 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-inventory\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.156874 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvswj\" (UniqueName: \"kubernetes.io/projected/03ebbaac-5961-4e6e-8709-93bb85975c9c-kube-api-access-fvswj\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.156930 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.157001 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.258733 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-inventory\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.258815 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvswj\" (UniqueName: \"kubernetes.io/projected/03ebbaac-5961-4e6e-8709-93bb85975c9c-kube-api-access-fvswj\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " 
pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.258865 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.258914 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.264415 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.264559 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.265573 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-inventory\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.276894 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvswj\" (UniqueName: \"kubernetes.io/projected/03ebbaac-5961-4e6e-8709-93bb85975c9c-kube-api-access-fvswj\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.436668 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.995553 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-zwtbg"] Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.996343 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:33:07 crc kubenswrapper[4902]: I0121 16:33:07.338774 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" event={"ID":"03ebbaac-5961-4e6e-8709-93bb85975c9c","Type":"ContainerStarted","Data":"09aa8b83385d63d462672b53c56cdfbd5ebc8b48d5b861d719dd5d15fd038fc7"} Jan 21 16:33:08 crc kubenswrapper[4902]: I0121 16:33:08.350622 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" event={"ID":"03ebbaac-5961-4e6e-8709-93bb85975c9c","Type":"ContainerStarted","Data":"8ba6b111039dcfe25a533eabe26035e6c80ba704480ef20d3bc95434f920bf57"} Jan 21 16:33:08 crc kubenswrapper[4902]: I0121 16:33:08.379034 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" podStartSLOduration=1.348833035 podStartE2EDuration="2.379005334s" podCreationTimestamp="2026-01-21 16:33:06 +0000 UTC" firstStartedPulling="2026-01-21 16:33:06.996090584 +0000 UTC m=+7149.072923613" lastFinishedPulling="2026-01-21 16:33:08.026262883 +0000 UTC m=+7150.103095912" observedRunningTime="2026-01-21 16:33:08.368554078 +0000 UTC m=+7150.445387117" watchObservedRunningTime="2026-01-21 16:33:08.379005334 +0000 UTC m=+7150.455838363" Jan 21 16:33:35 crc kubenswrapper[4902]: I0121 16:33:35.621245 4902 scope.go:117] "RemoveContainer" containerID="b373e6919ca764e66afc03ed68fe6af0501058a2ad9ef7fa08c0b3af4ce3215b" Jan 21 16:33:35 crc kubenswrapper[4902]: I0121 16:33:35.664212 4902 scope.go:117] "RemoveContainer" containerID="0b5fb853e79c68c6241f67d5b7bbcb7d13dc083797c50a00f83b6ef27ef4b827" Jan 21 16:33:35 crc kubenswrapper[4902]: I0121 16:33:35.685467 4902 scope.go:117] "RemoveContainer" containerID="35f73d651eeaa6573d9033ccbf674b8ce47b749239de3eb8f9420a462171ab10" Jan 21 16:33:35 crc kubenswrapper[4902]: I0121 16:33:35.732097 4902 scope.go:117] "RemoveContainer" containerID="32cdb44674e6374f766b65eaed6a61b60758360dd1e8e594ab7a3baf4d914d87" Jan 21 16:33:35 crc kubenswrapper[4902]: I0121 16:33:35.753443 4902 scope.go:117] "RemoveContainer" containerID="ecc4d1b7ad6d3c3e3e91d4bd9e4657053e105bd206863129b0c9caecb3844760" Jan 21 16:33:35 crc kubenswrapper[4902]: I0121 16:33:35.810164 4902 scope.go:117] "RemoveContainer" containerID="04aced0c0b567c17119cd21528fe883b24627e7fda15f96134eacb5302158c50" Jan 21 16:33:47 crc kubenswrapper[4902]: I0121 16:33:47.769521 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:33:47 crc kubenswrapper[4902]: I0121 16:33:47.770138 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 
16:34:17 crc kubenswrapper[4902]: I0121 16:34:17.769696 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:34:17 crc kubenswrapper[4902]: I0121 16:34:17.772541 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:34:47 crc kubenswrapper[4902]: I0121 16:34:47.770475 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:34:47 crc kubenswrapper[4902]: I0121 16:34:47.771188 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:34:47 crc kubenswrapper[4902]: I0121 16:34:47.771242 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 16:34:47 crc kubenswrapper[4902]: I0121 16:34:47.772219 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b5417485127dae8a96d83b6c2fcfd1f6e929b87a550a052576012cdd78d3701"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:34:47 crc kubenswrapper[4902]: I0121 16:34:47.772291 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://8b5417485127dae8a96d83b6c2fcfd1f6e929b87a550a052576012cdd78d3701" gracePeriod=600 Jan 21 16:34:48 crc kubenswrapper[4902]: I0121 16:34:48.295725 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="8b5417485127dae8a96d83b6c2fcfd1f6e929b87a550a052576012cdd78d3701" exitCode=0 Jan 21 16:34:48 crc kubenswrapper[4902]: I0121 16:34:48.306841 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"8b5417485127dae8a96d83b6c2fcfd1f6e929b87a550a052576012cdd78d3701"} Jan 21 16:34:48 crc kubenswrapper[4902]: I0121 16:34:48.306895 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"} Jan 21 16:34:48 crc kubenswrapper[4902]: I0121 16:34:48.306918 4902 scope.go:117] "RemoveContainer" 
containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:36:21 crc kubenswrapper[4902]: I0121 16:36:21.213712 4902 generic.go:334] "Generic (PLEG): container finished" podID="03ebbaac-5961-4e6e-8709-93bb85975c9c" containerID="8ba6b111039dcfe25a533eabe26035e6c80ba704480ef20d3bc95434f920bf57" exitCode=0 Jan 21 16:36:21 crc kubenswrapper[4902]: I0121 16:36:21.213829 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" event={"ID":"03ebbaac-5961-4e6e-8709-93bb85975c9c","Type":"ContainerDied","Data":"8ba6b111039dcfe25a533eabe26035e6c80ba704480ef20d3bc95434f920bf57"} Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.669505 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.840787 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-inventory\") pod \"03ebbaac-5961-4e6e-8709-93bb85975c9c\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.840861 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvswj\" (UniqueName: \"kubernetes.io/projected/03ebbaac-5961-4e6e-8709-93bb85975c9c-kube-api-access-fvswj\") pod \"03ebbaac-5961-4e6e-8709-93bb85975c9c\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.840941 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-bootstrap-combined-ca-bundle\") pod \"03ebbaac-5961-4e6e-8709-93bb85975c9c\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.840972 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-ssh-key-openstack-cell1\") pod \"03ebbaac-5961-4e6e-8709-93bb85975c9c\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.847319 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "03ebbaac-5961-4e6e-8709-93bb85975c9c" (UID: "03ebbaac-5961-4e6e-8709-93bb85975c9c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.847383 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ebbaac-5961-4e6e-8709-93bb85975c9c-kube-api-access-fvswj" (OuterVolumeSpecName: "kube-api-access-fvswj") pod "03ebbaac-5961-4e6e-8709-93bb85975c9c" (UID: "03ebbaac-5961-4e6e-8709-93bb85975c9c"). InnerVolumeSpecName "kube-api-access-fvswj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.877780 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "03ebbaac-5961-4e6e-8709-93bb85975c9c" (UID: "03ebbaac-5961-4e6e-8709-93bb85975c9c"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.878281 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-inventory" (OuterVolumeSpecName: "inventory") pod "03ebbaac-5961-4e6e-8709-93bb85975c9c" (UID: "03ebbaac-5961-4e6e-8709-93bb85975c9c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.943927 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvswj\" (UniqueName: \"kubernetes.io/projected/03ebbaac-5961-4e6e-8709-93bb85975c9c-kube-api-access-fvswj\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.943962 4902 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.943971 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.943983 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.240313 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" event={"ID":"03ebbaac-5961-4e6e-8709-93bb85975c9c","Type":"ContainerDied","Data":"09aa8b83385d63d462672b53c56cdfbd5ebc8b48d5b861d719dd5d15fd038fc7"} Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.240366 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09aa8b83385d63d462672b53c56cdfbd5ebc8b48d5b861d719dd5d15fd038fc7" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.240483 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.337232 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-lvw72"] Jan 21 16:36:23 crc kubenswrapper[4902]: E0121 16:36:23.338105 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ebbaac-5961-4e6e-8709-93bb85975c9c" containerName="bootstrap-openstack-openstack-cell1" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.338127 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ebbaac-5961-4e6e-8709-93bb85975c9c" containerName="bootstrap-openstack-openstack-cell1" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.338401 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ebbaac-5961-4e6e-8709-93bb85975c9c" containerName="bootstrap-openstack-openstack-cell1" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.339273 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.343700 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.344475 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.344994 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.346701 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.355982 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-lvw72"] Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.489269 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6ns4\" (UniqueName: \"kubernetes.io/projected/d171dc59-1575-4895-b80f-0886e901b704-kube-api-access-p6ns4\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.489378 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-inventory\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.489515 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.591548 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6ns4\" (UniqueName: 
\"kubernetes.io/projected/d171dc59-1575-4895-b80f-0886e901b704-kube-api-access-p6ns4\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.591789 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-inventory\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.592004 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.599701 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.604515 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-inventory\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.609952 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6ns4\" (UniqueName: \"kubernetes.io/projected/d171dc59-1575-4895-b80f-0886e901b704-kube-api-access-p6ns4\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.660563 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:24 crc kubenswrapper[4902]: I0121 16:36:24.242156 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-lvw72"] Jan 21 16:36:25 crc kubenswrapper[4902]: I0121 16:36:25.383203 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-lvw72" event={"ID":"d171dc59-1575-4895-b80f-0886e901b704","Type":"ContainerStarted","Data":"7b3fb5c8e9391f6b9622a0cf5505767f0407f225f074edaa215ecff368c0b7eb"} Jan 21 16:36:25 crc kubenswrapper[4902]: I0121 16:36:25.383627 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-lvw72" event={"ID":"d171dc59-1575-4895-b80f-0886e901b704","Type":"ContainerStarted","Data":"48dbb757376e3a511ff5530e99c8c98362a3ac0c8c52ba32a7a1bbd83c254216"} Jan 21 16:36:25 crc kubenswrapper[4902]: I0121 16:36:25.403835 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-lvw72" podStartSLOduration=1.833955939 podStartE2EDuration="2.403818873s" podCreationTimestamp="2026-01-21 16:36:23 +0000 UTC" firstStartedPulling="2026-01-21 16:36:24.244301934 +0000 UTC m=+7346.321134963" lastFinishedPulling="2026-01-21 16:36:24.814164858 +0000 UTC m=+7346.890997897" observedRunningTime="2026-01-21 16:36:25.401185018 +0000 UTC m=+7347.478018047" watchObservedRunningTime="2026-01-21 16:36:25.403818873 +0000 UTC m=+7347.480651902" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.489424 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d6z22"] Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.492746 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.500101 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6z22"] Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.638039 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-utilities\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.638174 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2sff\" (UniqueName: \"kubernetes.io/projected/bd62113d-9826-4317-8ad0-b2f1d06c81c0-kube-api-access-l2sff\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.638263 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-catalog-content\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.740230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-utilities\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.740361 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2sff\" (UniqueName: \"kubernetes.io/projected/bd62113d-9826-4317-8ad0-b2f1d06c81c0-kube-api-access-l2sff\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.740449 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-catalog-content\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.740876 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-utilities\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.740912 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-catalog-content\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.761915 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-l2sff\" (UniqueName: \"kubernetes.io/projected/bd62113d-9826-4317-8ad0-b2f1d06c81c0-kube-api-access-l2sff\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.829633 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:58 crc kubenswrapper[4902]: I0121 16:36:58.326399 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6z22"] Jan 21 16:36:58 crc kubenswrapper[4902]: I0121 16:36:58.747234 4902 generic.go:334] "Generic (PLEG): container finished" podID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerID="04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee" exitCode=0 Jan 21 16:36:58 crc kubenswrapper[4902]: I0121 16:36:58.747359 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6z22" event={"ID":"bd62113d-9826-4317-8ad0-b2f1d06c81c0","Type":"ContainerDied","Data":"04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee"} Jan 21 16:36:58 crc kubenswrapper[4902]: I0121 16:36:58.747583 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6z22" event={"ID":"bd62113d-9826-4317-8ad0-b2f1d06c81c0","Type":"ContainerStarted","Data":"10ec2ea5668a87f8e5065a6bf22b0becd71a72a8a05e59574aa952a1a8d3d6b1"} Jan 21 16:37:00 crc kubenswrapper[4902]: I0121 16:37:00.768424 4902 generic.go:334] "Generic (PLEG): container finished" podID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerID="b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae" exitCode=0 Jan 21 16:37:00 crc kubenswrapper[4902]: I0121 16:37:00.768485 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6z22" event={"ID":"bd62113d-9826-4317-8ad0-b2f1d06c81c0","Type":"ContainerDied","Data":"b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae"} Jan 21 16:37:01 crc kubenswrapper[4902]: I0121 16:37:01.781533 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6z22" event={"ID":"bd62113d-9826-4317-8ad0-b2f1d06c81c0","Type":"ContainerStarted","Data":"18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144"} Jan 21 16:37:01 crc kubenswrapper[4902]: I0121 16:37:01.820576 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d6z22" podStartSLOduration=2.313476762 podStartE2EDuration="4.820555932s" podCreationTimestamp="2026-01-21 16:36:57 +0000 UTC" firstStartedPulling="2026-01-21 16:36:58.749930095 +0000 UTC m=+7380.826763124" lastFinishedPulling="2026-01-21 16:37:01.257009255 +0000 UTC m=+7383.333842294" observedRunningTime="2026-01-21 16:37:01.812069181 +0000 UTC m=+7383.888902230" watchObservedRunningTime="2026-01-21 16:37:01.820555932 +0000 UTC m=+7383.897388961" Jan 21 16:37:07 crc kubenswrapper[4902]: I0121 16:37:07.830481 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:37:07 crc kubenswrapper[4902]: I0121 16:37:07.831016 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:37:07 crc kubenswrapper[4902]: I0121 16:37:07.880735 4902 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:37:07 crc kubenswrapper[4902]: I0121 16:37:07.942648 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:37:08 crc kubenswrapper[4902]: I0121 16:37:08.126485 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6z22"] Jan 21 16:37:09 crc kubenswrapper[4902]: I0121 16:37:09.873319 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d6z22" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerName="registry-server" containerID="cri-o://18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144" gracePeriod=2 Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.336200 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.471961 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-catalog-content\") pod \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.472494 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2sff\" (UniqueName: \"kubernetes.io/projected/bd62113d-9826-4317-8ad0-b2f1d06c81c0-kube-api-access-l2sff\") pod \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.472589 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-utilities\") pod \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.475057 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-utilities" (OuterVolumeSpecName: "utilities") pod "bd62113d-9826-4317-8ad0-b2f1d06c81c0" (UID: "bd62113d-9826-4317-8ad0-b2f1d06c81c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.478770 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd62113d-9826-4317-8ad0-b2f1d06c81c0-kube-api-access-l2sff" (OuterVolumeSpecName: "kube-api-access-l2sff") pod "bd62113d-9826-4317-8ad0-b2f1d06c81c0" (UID: "bd62113d-9826-4317-8ad0-b2f1d06c81c0"). InnerVolumeSpecName "kube-api-access-l2sff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.499256 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd62113d-9826-4317-8ad0-b2f1d06c81c0" (UID: "bd62113d-9826-4317-8ad0-b2f1d06c81c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.575763 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.575829 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2sff\" (UniqueName: \"kubernetes.io/projected/bd62113d-9826-4317-8ad0-b2f1d06c81c0-kube-api-access-l2sff\") on node \"crc\" DevicePath \"\"" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.575843 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.896966 4902 generic.go:334] "Generic (PLEG): container finished" podID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerID="18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144" exitCode=0 Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.897015 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6z22" event={"ID":"bd62113d-9826-4317-8ad0-b2f1d06c81c0","Type":"ContainerDied","Data":"18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144"} Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.897115 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6z22" event={"ID":"bd62113d-9826-4317-8ad0-b2f1d06c81c0","Type":"ContainerDied","Data":"10ec2ea5668a87f8e5065a6bf22b0becd71a72a8a05e59574aa952a1a8d3d6b1"} Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.897138 4902 scope.go:117] "RemoveContainer" containerID="18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.897177 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.919677 4902 scope.go:117] "RemoveContainer" containerID="b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.940112 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6z22"] Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.951073 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6z22"] Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.956821 4902 scope.go:117] "RemoveContainer" containerID="04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.987297 4902 scope.go:117] "RemoveContainer" containerID="18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144" Jan 21 16:37:10 crc kubenswrapper[4902]: E0121 16:37:10.987873 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144\": container with ID starting with 18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144 not found: ID does not exist" containerID="18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.987904 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144"} err="failed to get container status \"18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144\": rpc error: code = NotFound desc = could not find container \"18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144\": container with ID starting with 18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144 not found: ID does not exist" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.987925 4902 scope.go:117] "RemoveContainer" containerID="b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae" Jan 21 16:37:10 crc kubenswrapper[4902]: E0121 16:37:10.988294 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae\": container with ID starting with b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae not found: ID does not exist" containerID="b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.988336 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae"} err="failed to get container status \"b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae\": rpc error: code = NotFound desc = could not find container \"b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae\": container with ID starting with b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae not found: ID does not exist" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.988364 4902 scope.go:117] "RemoveContainer" containerID="04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee" Jan 21 16:37:10 crc kubenswrapper[4902]: E0121 16:37:10.988788 4902 log.go:32] "ContainerStatus from runtime service 
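The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triples above are benign teardown noise: by the time scope cleanup asks CRI-O for each container's status, the container has already been removed, so the delete is effectively idempotent and the kubelet records the NotFound and moves on. A minimal sketch of that idempotent-delete pattern, with a hypothetical `runtime` client standing in for the real CRI stub:

```python
class NotFoundError(Exception):
    """The runtime no longer knows the requested container ID."""

def remove_container_idempotent(runtime, container_id: str) -> None:
    # `runtime` is a hypothetical client, not the kubelet's actual CRI API.
    # Mirrors the log above: a NotFound is logged but otherwise ignored,
    # because the desired end state (container gone) already holds.
    try:
        runtime.remove_container(container_id)
    except NotFoundError:
        pass  # already deleted elsewhere; nothing left to do
```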
failed" err="rpc error: code = NotFound desc = could not find container \"04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee\": container with ID starting with 04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee not found: ID does not exist" containerID="04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.988863 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee"} err="failed to get container status \"04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee\": rpc error: code = NotFound desc = could not find container \"04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee\": container with ID starting with 04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee not found: ID does not exist" Jan 21 16:37:12 crc kubenswrapper[4902]: I0121 16:37:12.315301 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" path="/var/lib/kubelet/pods/bd62113d-9826-4317-8ad0-b2f1d06c81c0/volumes" Jan 21 16:37:17 crc kubenswrapper[4902]: I0121 16:37:17.769891 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:37:17 crc kubenswrapper[4902]: I0121 16:37:17.775259 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:37:47 crc kubenswrapper[4902]: I0121 16:37:47.771869 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:37:47 crc kubenswrapper[4902]: I0121 16:37:47.772472 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:37:57 crc kubenswrapper[4902]: I0121 16:37:57.321378 4902 generic.go:334] "Generic (PLEG): container finished" podID="d171dc59-1575-4895-b80f-0886e901b704" containerID="7b3fb5c8e9391f6b9622a0cf5505767f0407f225f074edaa215ecff368c0b7eb" exitCode=0 Jan 21 16:37:57 crc kubenswrapper[4902]: I0121 16:37:57.321561 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-lvw72" event={"ID":"d171dc59-1575-4895-b80f-0886e901b704","Type":"ContainerDied","Data":"7b3fb5c8e9391f6b9622a0cf5505767f0407f225f074edaa215ecff368c0b7eb"} Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.777262 4902 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.777262 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-lvw72"
Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.816093 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-ssh-key-openstack-cell1\") pod \"d171dc59-1575-4895-b80f-0886e901b704\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") "
Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.816207 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-inventory\") pod \"d171dc59-1575-4895-b80f-0886e901b704\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") "
Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.816287 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6ns4\" (UniqueName: \"kubernetes.io/projected/d171dc59-1575-4895-b80f-0886e901b704-kube-api-access-p6ns4\") pod \"d171dc59-1575-4895-b80f-0886e901b704\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") "
Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.824559 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d171dc59-1575-4895-b80f-0886e901b704-kube-api-access-p6ns4" (OuterVolumeSpecName: "kube-api-access-p6ns4") pod "d171dc59-1575-4895-b80f-0886e901b704" (UID: "d171dc59-1575-4895-b80f-0886e901b704"). InnerVolumeSpecName "kube-api-access-p6ns4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.846858 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-inventory" (OuterVolumeSpecName: "inventory") pod "d171dc59-1575-4895-b80f-0886e901b704" (UID: "d171dc59-1575-4895-b80f-0886e901b704"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.847069 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "d171dc59-1575-4895-b80f-0886e901b704" (UID: "d171dc59-1575-4895-b80f-0886e901b704"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.918323 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.918356 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.918365 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6ns4\" (UniqueName: \"kubernetes.io/projected/d171dc59-1575-4895-b80f-0886e901b704-kube-api-access-p6ns4\") on node \"crc\" DevicePath \"\""
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.344488 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-lvw72" event={"ID":"d171dc59-1575-4895-b80f-0886e901b704","Type":"ContainerDied","Data":"48dbb757376e3a511ff5530e99c8c98362a3ac0c8c52ba32a7a1bbd83c254216"}
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.344533 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48dbb757376e3a511ff5530e99c8c98362a3ac0c8c52ba32a7a1bbd83c254216"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.344594 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-lvw72"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.436616 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-jgd86"]
Jan 21 16:37:59 crc kubenswrapper[4902]: E0121 16:37:59.437010 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerName="registry-server"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.437026 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerName="registry-server"
Jan 21 16:37:59 crc kubenswrapper[4902]: E0121 16:37:59.437037 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d171dc59-1575-4895-b80f-0886e901b704" containerName="download-cache-openstack-openstack-cell1"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.437055 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d171dc59-1575-4895-b80f-0886e901b704" containerName="download-cache-openstack-openstack-cell1"
Jan 21 16:37:59 crc kubenswrapper[4902]: E0121 16:37:59.437075 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerName="extract-utilities"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.437082 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerName="extract-utilities"
Jan 21 16:37:59 crc kubenswrapper[4902]: E0121 16:37:59.437104 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerName="extract-content"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.437109 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerName="extract-content"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.437290 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerName="registry-server"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.437342 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d171dc59-1575-4895-b80f-0886e901b704" containerName="download-cache-openstack-openstack-cell1"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.438035 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-jgd86"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.441778 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.441780 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.441844 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.441795 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.448325 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-jgd86"]
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.532772 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.532878 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpjvq\" (UniqueName: \"kubernetes.io/projected/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-kube-api-access-jpjvq\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.532945 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-inventory\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.636280 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-inventory\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.636449 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.636490 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpjvq\" (UniqueName: \"kubernetes.io/projected/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-kube-api-access-jpjvq\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.647756 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-inventory\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.650642 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.673820 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpjvq\" (UniqueName: \"kubernetes.io/projected/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-kube-api-access-jpjvq\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86"
Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.756783 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-jgd86"
Jan 21 16:38:00 crc kubenswrapper[4902]: I0121 16:38:00.356775 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-jgd86"]
Jan 21 16:38:01 crc kubenswrapper[4902]: I0121 16:38:01.365021 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-jgd86" event={"ID":"2418bfc5-bf9b-4397-bc7f-20aa86aa582a","Type":"ContainerStarted","Data":"69f60bb136372fb2378f342b345859b169e746d2f6f9374d0fb348efe83cb1b2"}
Jan 21 16:38:01 crc kubenswrapper[4902]: I0121 16:38:01.365364 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-jgd86" event={"ID":"2418bfc5-bf9b-4397-bc7f-20aa86aa582a","Type":"ContainerStarted","Data":"c302a71944841262a867537b73d171a54730ef06ab00ad3abc0cf5946248e3eb"}
Jan 21 16:38:01 crc kubenswrapper[4902]: I0121 16:38:01.391616 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-jgd86" podStartSLOduration=1.952529207 podStartE2EDuration="2.391594201s" podCreationTimestamp="2026-01-21 16:37:59 +0000 UTC" firstStartedPulling="2026-01-21 16:38:00.36736182 +0000 UTC m=+7442.444194849" lastFinishedPulling="2026-01-21 16:38:00.806426794 +0000 UTC m=+7442.883259843" observedRunningTime="2026-01-21 16:38:01.387249568 +0000 UTC m=+7443.464082607" watchObservedRunningTime="2026-01-21 16:38:01.391594201 +0000 UTC m=+7443.468427240"
Jan 21 16:38:17 crc kubenswrapper[4902]: I0121 16:38:17.769935 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:38:17 crc kubenswrapper[4902]: I0121 16:38:17.770472 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:38:17 crc kubenswrapper[4902]: I0121 16:38:17.770521 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb"
Jan 21 16:38:17 crc kubenswrapper[4902]: I0121 16:38:17.771446 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:38:17 crc kubenswrapper[4902]: I0121 16:38:17.771511 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" gracePeriod=600
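The pod_startup_latency_tracker record above encodes a simple relationship worth spelling out: podStartE2EDuration is the observed-running time minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken on the monotonic m=+ clock). Checking with the configure-network pod's own numbers from the log:

```python
e2e = 2.391594201                        # podStartE2EDuration from the log
pull = 7442.883259843 - 7442.444194849   # lastFinishedPulling - firstStartedPulling (m=+ clock)
slo = e2e - pull
print(round(slo, 9))                     # 1.952529207 == podStartSLOduration
```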
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:38:18 crc kubenswrapper[4902]: I0121 16:38:18.527345 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" exitCode=0 Jan 21 16:38:18 crc kubenswrapper[4902]: I0121 16:38:18.527782 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"} Jan 21 16:38:18 crc kubenswrapper[4902]: I0121 16:38:18.527847 4902 scope.go:117] "RemoveContainer" containerID="8b5417485127dae8a96d83b6c2fcfd1f6e929b87a550a052576012cdd78d3701" Jan 21 16:38:18 crc kubenswrapper[4902]: I0121 16:38:18.528737 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:38:18 crc kubenswrapper[4902]: E0121 16:38:18.529121 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:38:33 crc kubenswrapper[4902]: I0121 16:38:33.295798 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:38:33 crc kubenswrapper[4902]: E0121 16:38:33.296557 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:38:46 crc kubenswrapper[4902]: I0121 16:38:46.296254 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:38:46 crc kubenswrapper[4902]: E0121 16:38:46.297722 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:38:58 crc kubenswrapper[4902]: I0121 16:38:58.303299 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:38:58 crc kubenswrapper[4902]: E0121 16:38:58.304153 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:39:13 crc kubenswrapper[4902]: I0121 16:39:13.295034 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:39:13 crc kubenswrapper[4902]: E0121 16:39:13.296166 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:39:23 crc kubenswrapper[4902]: I0121 16:39:23.180426 4902 generic.go:334] "Generic (PLEG): container finished" podID="2418bfc5-bf9b-4397-bc7f-20aa86aa582a" containerID="69f60bb136372fb2378f342b345859b169e746d2f6f9374d0fb348efe83cb1b2" exitCode=0 Jan 21 16:39:23 crc kubenswrapper[4902]: I0121 16:39:23.180519 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-jgd86" event={"ID":"2418bfc5-bf9b-4397-bc7f-20aa86aa582a","Type":"ContainerDied","Data":"69f60bb136372fb2378f342b345859b169e746d2f6f9374d0fb348efe83cb1b2"} Jan 21 16:39:24 crc kubenswrapper[4902]: I0121 16:39:24.796134 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-jgd86" Jan 21 16:39:24 crc kubenswrapper[4902]: I0121 16:39:24.981980 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpjvq\" (UniqueName: \"kubernetes.io/projected/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-kube-api-access-jpjvq\") pod \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " Jan 21 16:39:24 crc kubenswrapper[4902]: I0121 16:39:24.982449 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-inventory\") pod \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " Jan 21 16:39:24 crc kubenswrapper[4902]: I0121 16:39:24.982600 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-ssh-key-openstack-cell1\") pod \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " Jan 21 16:39:24 crc kubenswrapper[4902]: I0121 16:39:24.990945 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-kube-api-access-jpjvq" (OuterVolumeSpecName: "kube-api-access-jpjvq") pod "2418bfc5-bf9b-4397-bc7f-20aa86aa582a" (UID: "2418bfc5-bf9b-4397-bc7f-20aa86aa582a"). InnerVolumeSpecName "kube-api-access-jpjvq". 
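The machine-config-daemon records repeat because every sync retries the pod while the container is held in crash-loop back-off. The kubelet's back-off doubles per restart from a 10s base up to a 5m cap (those two constants are kubelet defaults stated from general knowledge, not taken from this log), so "back-off 5m0s" indicates this container has already reached the cap. A sketch of that schedule:

```python
import itertools

def crashloop_backoff(base: float = 10.0, cap: float = 300.0):
    """Yield successive crash-loop delays: double each restart, capped."""
    delay = base
    while True:
        yield min(delay, cap)
        delay *= 2

print(list(itertools.islice(crashloop_backoff(), 7)))
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]
```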
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.012700 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-inventory" (OuterVolumeSpecName: "inventory") pod "2418bfc5-bf9b-4397-bc7f-20aa86aa582a" (UID: "2418bfc5-bf9b-4397-bc7f-20aa86aa582a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.016491 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "2418bfc5-bf9b-4397-bc7f-20aa86aa582a" (UID: "2418bfc5-bf9b-4397-bc7f-20aa86aa582a"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.085335 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpjvq\" (UniqueName: \"kubernetes.io/projected/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-kube-api-access-jpjvq\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.087224 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.087379 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.382730 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-jgd86" event={"ID":"2418bfc5-bf9b-4397-bc7f-20aa86aa582a","Type":"ContainerDied","Data":"c302a71944841262a867537b73d171a54730ef06ab00ad3abc0cf5946248e3eb"} Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.382776 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c302a71944841262a867537b73d171a54730ef06ab00ad3abc0cf5946248e3eb" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.382828 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-jgd86" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.397494 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-5c9t8"] Jan 21 16:39:25 crc kubenswrapper[4902]: E0121 16:39:25.400361 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2418bfc5-bf9b-4397-bc7f-20aa86aa582a" containerName="configure-network-openstack-openstack-cell1" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.400384 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2418bfc5-bf9b-4397-bc7f-20aa86aa582a" containerName="configure-network-openstack-openstack-cell1" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.401985 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2418bfc5-bf9b-4397-bc7f-20aa86aa582a" containerName="configure-network-openstack-openstack-cell1" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.430914 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-5c9t8"] Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.431102 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.437717 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.438243 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.438265 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.438417 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.510292 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vghhx\" (UniqueName: \"kubernetes.io/projected/ffce6892-25f4-48d1-b314-24d784fbc43f-kube-api-access-vghhx\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.510538 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-inventory\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.510873 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.612523 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.612589 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vghhx\" (UniqueName: \"kubernetes.io/projected/ffce6892-25f4-48d1-b314-24d784fbc43f-kube-api-access-vghhx\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.612714 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-inventory\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.618305 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.626623 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-inventory\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.631891 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vghhx\" (UniqueName: \"kubernetes.io/projected/ffce6892-25f4-48d1-b314-24d784fbc43f-kube-api-access-vghhx\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.760973 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:26 crc kubenswrapper[4902]: W0121 16:39:26.323270 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffce6892_25f4_48d1_b314_24d784fbc43f.slice/crio-20e348370114a2e50aab14d772457f260853eab9ef63a14d38be5fe459f2ea9e WatchSource:0}: Error finding container 20e348370114a2e50aab14d772457f260853eab9ef63a14d38be5fe459f2ea9e: Status 404 returned error can't find the container with id 20e348370114a2e50aab14d772457f260853eab9ef63a14d38be5fe459f2ea9e Jan 21 16:39:26 crc kubenswrapper[4902]: I0121 16:39:26.324596 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-5c9t8"] Jan 21 16:39:26 crc kubenswrapper[4902]: I0121 16:39:26.327018 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:39:26 crc kubenswrapper[4902]: I0121 16:39:26.395512 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" event={"ID":"ffce6892-25f4-48d1-b314-24d784fbc43f","Type":"ContainerStarted","Data":"20e348370114a2e50aab14d772457f260853eab9ef63a14d38be5fe459f2ea9e"} Jan 21 16:39:27 crc kubenswrapper[4902]: I0121 16:39:27.413667 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" event={"ID":"ffce6892-25f4-48d1-b314-24d784fbc43f","Type":"ContainerStarted","Data":"2debda954ce460ba3ebdc1bc42e8959780a79f976e8a7784022fb6fa887a3fd5"} Jan 21 16:39:28 crc kubenswrapper[4902]: I0121 16:39:28.302619 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:39:28 crc kubenswrapper[4902]: E0121 16:39:28.303199 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:39:32 crc kubenswrapper[4902]: I0121 16:39:32.468913 4902 generic.go:334] "Generic (PLEG): container finished" podID="ffce6892-25f4-48d1-b314-24d784fbc43f" containerID="2debda954ce460ba3ebdc1bc42e8959780a79f976e8a7784022fb6fa887a3fd5" exitCode=0 Jan 21 16:39:32 crc kubenswrapper[4902]: I0121 16:39:32.469031 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" event={"ID":"ffce6892-25f4-48d1-b314-24d784fbc43f","Type":"ContainerDied","Data":"2debda954ce460ba3ebdc1bc42e8959780a79f976e8a7784022fb6fa887a3fd5"} Jan 21 16:39:33 crc kubenswrapper[4902]: I0121 16:39:33.995994 4902 util.go:48] "No ready sandbox for pod can be found. 
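The lone W-level record above (manager.go:1169, cAdvisor's container watcher) is a startup race rather than a failure: the cgroup for the new crio-... scope appears before the runtime can answer for the container, the lookup 404s, and the watch event is dropped; the very next PLEG line shows the same container ID starting normally. A sketch of tolerating that race when consuming such events (hypothetical `inspect` helper that returns None on a 404):

```python
def handle_watch_event(inspect, container_id: str) -> bool:
    """Return True if processed, False if dropped as not-yet-registered."""
    info = inspect(container_id)  # hypothetical lookup; None when the runtime 404s
    if info is None:
        # The cgroup appeared before the runtime registered the container;
        # drop the event and let a later sync pick the container up.
        return False
    return True
```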
Jan 21 16:39:33 crc kubenswrapper[4902]: I0121 16:39:33.995994 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5c9t8"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.098716 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-inventory\") pod \"ffce6892-25f4-48d1-b314-24d784fbc43f\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") "
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.098798 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-ssh-key-openstack-cell1\") pod \"ffce6892-25f4-48d1-b314-24d784fbc43f\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") "
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.098850 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vghhx\" (UniqueName: \"kubernetes.io/projected/ffce6892-25f4-48d1-b314-24d784fbc43f-kube-api-access-vghhx\") pod \"ffce6892-25f4-48d1-b314-24d784fbc43f\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") "
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.114244 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffce6892-25f4-48d1-b314-24d784fbc43f-kube-api-access-vghhx" (OuterVolumeSpecName: "kube-api-access-vghhx") pod "ffce6892-25f4-48d1-b314-24d784fbc43f" (UID: "ffce6892-25f4-48d1-b314-24d784fbc43f"). InnerVolumeSpecName "kube-api-access-vghhx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.133771 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-inventory" (OuterVolumeSpecName: "inventory") pod "ffce6892-25f4-48d1-b314-24d784fbc43f" (UID: "ffce6892-25f4-48d1-b314-24d784fbc43f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.140411 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "ffce6892-25f4-48d1-b314-24d784fbc43f" (UID: "ffce6892-25f4-48d1-b314-24d784fbc43f"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.201974 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.202017 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.202032 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vghhx\" (UniqueName: \"kubernetes.io/projected/ffce6892-25f4-48d1-b314-24d784fbc43f-kube-api-access-vghhx\") on node \"crc\" DevicePath \"\""
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.499152 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" event={"ID":"ffce6892-25f4-48d1-b314-24d784fbc43f","Type":"ContainerDied","Data":"20e348370114a2e50aab14d772457f260853eab9ef63a14d38be5fe459f2ea9e"}
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.499483 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20e348370114a2e50aab14d772457f260853eab9ef63a14d38be5fe459f2ea9e"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.499548 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5c9t8"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.587264 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-7xpxk"]
Jan 21 16:39:34 crc kubenswrapper[4902]: E0121 16:39:34.587841 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffce6892-25f4-48d1-b314-24d784fbc43f" containerName="validate-network-openstack-openstack-cell1"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.587866 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffce6892-25f4-48d1-b314-24d784fbc43f" containerName="validate-network-openstack-openstack-cell1"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.588172 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffce6892-25f4-48d1-b314-24d784fbc43f" containerName="validate-network-openstack-openstack-cell1"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.589156 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-7xpxk"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.592971 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.593058 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.593118 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.593260 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.601114 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-7xpxk"]
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.723307 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd7ld\" (UniqueName: \"kubernetes.io/projected/e253be6c-dccb-456f-b4ca-0aed1b901c43-kube-api-access-kd7ld\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.723368 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.723489 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-inventory\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.826386 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd7ld\" (UniqueName: \"kubernetes.io/projected/e253be6c-dccb-456f-b4ca-0aed1b901c43-kube-api-access-kd7ld\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.826435 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.826462 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-inventory\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.830776 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.830859 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-inventory\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.847665 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd7ld\" (UniqueName: \"kubernetes.io/projected/e253be6c-dccb-456f-b4ca-0aed1b901c43-kube-api-access-kd7ld\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk"
Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.923670 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-7xpxk"
Jan 21 16:39:35 crc kubenswrapper[4902]: I0121 16:39:35.502517 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-7xpxk"]
Jan 21 16:39:36 crc kubenswrapper[4902]: I0121 16:39:36.522312 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-7xpxk" event={"ID":"e253be6c-dccb-456f-b4ca-0aed1b901c43","Type":"ContainerStarted","Data":"7f14e700c3c08bd2436965f63df6596d4264b1913725352a693d9211f6ae13f3"}
Jan 21 16:39:36 crc kubenswrapper[4902]: I0121 16:39:36.522868 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-7xpxk" event={"ID":"e253be6c-dccb-456f-b4ca-0aed1b901c43","Type":"ContainerStarted","Data":"d0c3082fb7a35a7b9b6397aac0ac9cface842fa578167f4301df76aea1c35137"}
Jan 21 16:39:36 crc kubenswrapper[4902]: I0121 16:39:36.547491 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-7xpxk" podStartSLOduration=2.054464694 podStartE2EDuration="2.547473474s" podCreationTimestamp="2026-01-21 16:39:34 +0000 UTC" firstStartedPulling="2026-01-21 16:39:35.525415004 +0000 UTC m=+7537.602248043" lastFinishedPulling="2026-01-21 16:39:36.018423794 +0000 UTC m=+7538.095256823" observedRunningTime="2026-01-21 16:39:36.546379443 +0000 UTC m=+7538.623212472" watchObservedRunningTime="2026-01-21 16:39:36.547473474 +0000 UTC m=+7538.624306503"
Jan 21 16:39:40 crc kubenswrapper[4902]: I0121 16:39:40.295201 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"
Jan 21 16:39:40 crc kubenswrapper[4902]: E0121 16:39:40.296116 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.731133 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8lxc7"]
Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.733472 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8lxc7"
Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.745459 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8lxc7"]
Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.854256 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-utilities\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7"
Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.854362 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cwd6\" (UniqueName: \"kubernetes.io/projected/086170f4-76bd-43a5-861d-eca144befed6-kube-api-access-2cwd6\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7"
Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.854598 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-catalog-content\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7"
Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.956950 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-utilities\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7"
Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.957035 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cwd6\" (UniqueName: \"kubernetes.io/projected/086170f4-76bd-43a5-861d-eca144befed6-kube-api-access-2cwd6\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7"
Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.957239 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-catalog-content\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7"
Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.957954 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-utilities\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7"
(UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-catalog-content\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.980717 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cwd6\" (UniqueName: \"kubernetes.io/projected/086170f4-76bd-43a5-861d-eca144befed6-kube-api-access-2cwd6\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:44 crc kubenswrapper[4902]: I0121 16:39:44.062311 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:44 crc kubenswrapper[4902]: I0121 16:39:44.664988 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8lxc7"] Jan 21 16:39:45 crc kubenswrapper[4902]: I0121 16:39:45.610901 4902 generic.go:334] "Generic (PLEG): container finished" podID="086170f4-76bd-43a5-861d-eca144befed6" containerID="b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0" exitCode=0 Jan 21 16:39:45 crc kubenswrapper[4902]: I0121 16:39:45.610944 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lxc7" event={"ID":"086170f4-76bd-43a5-861d-eca144befed6","Type":"ContainerDied","Data":"b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0"} Jan 21 16:39:45 crc kubenswrapper[4902]: I0121 16:39:45.611262 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lxc7" event={"ID":"086170f4-76bd-43a5-861d-eca144befed6","Type":"ContainerStarted","Data":"5ab74e5da033ad0b6c3336ad93b57452ac02493806ddcb5b7c603ed687ba6556"} Jan 21 16:39:47 crc kubenswrapper[4902]: I0121 16:39:47.631878 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lxc7" event={"ID":"086170f4-76bd-43a5-861d-eca144befed6","Type":"ContainerStarted","Data":"1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875"} Jan 21 16:39:49 crc kubenswrapper[4902]: I0121 16:39:49.652314 4902 generic.go:334] "Generic (PLEG): container finished" podID="086170f4-76bd-43a5-861d-eca144befed6" containerID="1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875" exitCode=0 Jan 21 16:39:49 crc kubenswrapper[4902]: I0121 16:39:49.652407 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lxc7" event={"ID":"086170f4-76bd-43a5-861d-eca144befed6","Type":"ContainerDied","Data":"1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875"} Jan 21 16:39:50 crc kubenswrapper[4902]: I0121 16:39:50.676904 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lxc7" event={"ID":"086170f4-76bd-43a5-861d-eca144befed6","Type":"ContainerStarted","Data":"c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804"} Jan 21 16:39:50 crc kubenswrapper[4902]: I0121 16:39:50.707958 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8lxc7" podStartSLOduration=3.244496712 podStartE2EDuration="7.707935989s" podCreationTimestamp="2026-01-21 16:39:43 +0000 UTC" firstStartedPulling="2026-01-21 16:39:45.612933891 +0000 UTC m=+7547.689766930" lastFinishedPulling="2026-01-21 
16:39:50.076373168 +0000 UTC m=+7552.153206207" observedRunningTime="2026-01-21 16:39:50.701910998 +0000 UTC m=+7552.778744037" watchObservedRunningTime="2026-01-21 16:39:50.707935989 +0000 UTC m=+7552.784769018" Jan 21 16:39:52 crc kubenswrapper[4902]: I0121 16:39:52.294864 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:39:52 crc kubenswrapper[4902]: E0121 16:39:52.295470 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:39:54 crc kubenswrapper[4902]: I0121 16:39:54.062863 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:54 crc kubenswrapper[4902]: I0121 16:39:54.064272 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:54 crc kubenswrapper[4902]: I0121 16:39:54.114942 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:55 crc kubenswrapper[4902]: I0121 16:39:55.766384 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:55 crc kubenswrapper[4902]: I0121 16:39:55.810769 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8lxc7"] Jan 21 16:39:57 crc kubenswrapper[4902]: I0121 16:39:57.735079 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8lxc7" podUID="086170f4-76bd-43a5-861d-eca144befed6" containerName="registry-server" containerID="cri-o://c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804" gracePeriod=2 Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.215222 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.282856 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-utilities\") pod \"086170f4-76bd-43a5-861d-eca144befed6\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.282900 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-catalog-content\") pod \"086170f4-76bd-43a5-861d-eca144befed6\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.283073 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cwd6\" (UniqueName: \"kubernetes.io/projected/086170f4-76bd-43a5-861d-eca144befed6-kube-api-access-2cwd6\") pod \"086170f4-76bd-43a5-861d-eca144befed6\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.283790 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-utilities" (OuterVolumeSpecName: "utilities") pod "086170f4-76bd-43a5-861d-eca144befed6" (UID: "086170f4-76bd-43a5-861d-eca144befed6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.290789 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086170f4-76bd-43a5-861d-eca144befed6-kube-api-access-2cwd6" (OuterVolumeSpecName: "kube-api-access-2cwd6") pod "086170f4-76bd-43a5-861d-eca144befed6" (UID: "086170f4-76bd-43a5-861d-eca144befed6"). InnerVolumeSpecName "kube-api-access-2cwd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.350279 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "086170f4-76bd-43a5-861d-eca144befed6" (UID: "086170f4-76bd-43a5-861d-eca144befed6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.385994 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.386273 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.386397 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cwd6\" (UniqueName: \"kubernetes.io/projected/086170f4-76bd-43a5-861d-eca144befed6-kube-api-access-2cwd6\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.745669 4902 generic.go:334] "Generic (PLEG): container finished" podID="086170f4-76bd-43a5-861d-eca144befed6" containerID="c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804" exitCode=0 Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.745725 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lxc7" event={"ID":"086170f4-76bd-43a5-861d-eca144befed6","Type":"ContainerDied","Data":"c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804"} Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.745992 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lxc7" event={"ID":"086170f4-76bd-43a5-861d-eca144befed6","Type":"ContainerDied","Data":"5ab74e5da033ad0b6c3336ad93b57452ac02493806ddcb5b7c603ed687ba6556"} Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.746014 4902 scope.go:117] "RemoveContainer" containerID="c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.745743 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.777322 4902 scope.go:117] "RemoveContainer" containerID="1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.806492 4902 scope.go:117] "RemoveContainer" containerID="b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.815249 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8lxc7"] Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.821702 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8lxc7"] Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.859320 4902 scope.go:117] "RemoveContainer" containerID="c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804" Jan 21 16:39:58 crc kubenswrapper[4902]: E0121 16:39:58.859847 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804\": container with ID starting with c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804 not found: ID does not exist" containerID="c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.859889 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804"} err="failed to get container status \"c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804\": rpc error: code = NotFound desc = could not find container \"c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804\": container with ID starting with c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804 not found: ID does not exist" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.859918 4902 scope.go:117] "RemoveContainer" containerID="1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875" Jan 21 16:39:58 crc kubenswrapper[4902]: E0121 16:39:58.860320 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875\": container with ID starting with 1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875 not found: ID does not exist" containerID="1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.860366 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875"} err="failed to get container status \"1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875\": rpc error: code = NotFound desc = could not find container \"1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875\": container with ID starting with 1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875 not found: ID does not exist" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.860400 4902 scope.go:117] "RemoveContainer" containerID="b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0" Jan 21 16:39:58 crc kubenswrapper[4902]: E0121 16:39:58.860751 4902 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0\": container with ID starting with b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0 not found: ID does not exist" containerID="b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.860801 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0"} err="failed to get container status \"b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0\": rpc error: code = NotFound desc = could not find container \"b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0\": container with ID starting with b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0 not found: ID does not exist" Jan 21 16:40:00 crc kubenswrapper[4902]: I0121 16:40:00.312946 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="086170f4-76bd-43a5-861d-eca144befed6" path="/var/lib/kubelet/pods/086170f4-76bd-43a5-861d-eca144befed6/volumes" Jan 21 16:40:05 crc kubenswrapper[4902]: I0121 16:40:05.295646 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:40:05 crc kubenswrapper[4902]: E0121 16:40:05.297009 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:40:18 crc kubenswrapper[4902]: I0121 16:40:18.306402 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:40:18 crc kubenswrapper[4902]: E0121 16:40:18.307282 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:40:23 crc kubenswrapper[4902]: I0121 16:40:23.049815 4902 generic.go:334] "Generic (PLEG): container finished" podID="e253be6c-dccb-456f-b4ca-0aed1b901c43" containerID="7f14e700c3c08bd2436965f63df6596d4264b1913725352a693d9211f6ae13f3" exitCode=0 Jan 21 16:40:23 crc kubenswrapper[4902]: I0121 16:40:23.050425 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-7xpxk" event={"ID":"e253be6c-dccb-456f-b4ca-0aed1b901c43","Type":"ContainerDied","Data":"7f14e700c3c08bd2436965f63df6596d4264b1913725352a693d9211f6ae13f3"} Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.598066 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-7xpxk" Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.745102 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-ssh-key-openstack-cell1\") pod \"e253be6c-dccb-456f-b4ca-0aed1b901c43\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.745724 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd7ld\" (UniqueName: \"kubernetes.io/projected/e253be6c-dccb-456f-b4ca-0aed1b901c43-kube-api-access-kd7ld\") pod \"e253be6c-dccb-456f-b4ca-0aed1b901c43\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.745884 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-inventory\") pod \"e253be6c-dccb-456f-b4ca-0aed1b901c43\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.753947 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e253be6c-dccb-456f-b4ca-0aed1b901c43-kube-api-access-kd7ld" (OuterVolumeSpecName: "kube-api-access-kd7ld") pod "e253be6c-dccb-456f-b4ca-0aed1b901c43" (UID: "e253be6c-dccb-456f-b4ca-0aed1b901c43"). InnerVolumeSpecName "kube-api-access-kd7ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.785529 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-inventory" (OuterVolumeSpecName: "inventory") pod "e253be6c-dccb-456f-b4ca-0aed1b901c43" (UID: "e253be6c-dccb-456f-b4ca-0aed1b901c43"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.788406 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "e253be6c-dccb-456f-b4ca-0aed1b901c43" (UID: "e253be6c-dccb-456f-b4ca-0aed1b901c43"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.849417 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.849454 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd7ld\" (UniqueName: \"kubernetes.io/projected/e253be6c-dccb-456f-b4ca-0aed1b901c43-kube-api-access-kd7ld\") on node \"crc\" DevicePath \"\"" Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.849469 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.075782 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-7xpxk" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.075768 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-7xpxk" event={"ID":"e253be6c-dccb-456f-b4ca-0aed1b901c43","Type":"ContainerDied","Data":"d0c3082fb7a35a7b9b6397aac0ac9cface842fa578167f4301df76aea1c35137"} Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.075978 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0c3082fb7a35a7b9b6397aac0ac9cface842fa578167f4301df76aea1c35137" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.169433 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-w46l6"] Jan 21 16:40:25 crc kubenswrapper[4902]: E0121 16:40:25.169932 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086170f4-76bd-43a5-861d-eca144befed6" containerName="extract-content" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.169950 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="086170f4-76bd-43a5-861d-eca144befed6" containerName="extract-content" Jan 21 16:40:25 crc kubenswrapper[4902]: E0121 16:40:25.169981 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086170f4-76bd-43a5-861d-eca144befed6" containerName="extract-utilities" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.169988 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="086170f4-76bd-43a5-861d-eca144befed6" containerName="extract-utilities" Jan 21 16:40:25 crc kubenswrapper[4902]: E0121 16:40:25.170001 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e253be6c-dccb-456f-b4ca-0aed1b901c43" containerName="install-os-openstack-openstack-cell1" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.170018 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e253be6c-dccb-456f-b4ca-0aed1b901c43" containerName="install-os-openstack-openstack-cell1" Jan 21 16:40:25 crc kubenswrapper[4902]: E0121 16:40:25.170027 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086170f4-76bd-43a5-861d-eca144befed6" containerName="registry-server" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.170033 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="086170f4-76bd-43a5-861d-eca144befed6" containerName="registry-server" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.170254 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e253be6c-dccb-456f-b4ca-0aed1b901c43" containerName="install-os-openstack-openstack-cell1" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.170270 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="086170f4-76bd-43a5-861d-eca144befed6" containerName="registry-server" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.171035 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.173000 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.173178 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.177535 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.177768 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.182655 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-w46l6"] Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.360651 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdvm7\" (UniqueName: \"kubernetes.io/projected/4570bbab-b55a-498c-8276-2c7aa0969540-kube-api-access-tdvm7\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.360716 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-inventory\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.361487 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.462912 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-inventory\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.463427 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.464068 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdvm7\" (UniqueName: \"kubernetes.io/projected/4570bbab-b55a-498c-8276-2c7aa0969540-kube-api-access-tdvm7\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " 
pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.467175 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-inventory\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.477174 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.483736 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdvm7\" (UniqueName: \"kubernetes.io/projected/4570bbab-b55a-498c-8276-2c7aa0969540-kube-api-access-tdvm7\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.560421 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:26 crc kubenswrapper[4902]: I0121 16:40:26.147220 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-w46l6"] Jan 21 16:40:27 crc kubenswrapper[4902]: I0121 16:40:27.096993 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" event={"ID":"4570bbab-b55a-498c-8276-2c7aa0969540","Type":"ContainerStarted","Data":"88ff5cbcfa1e27400f64e32a1c15c4a3a56ae22145423ed31c76145fed2fd012"} Jan 21 16:40:28 crc kubenswrapper[4902]: I0121 16:40:28.107889 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" event={"ID":"4570bbab-b55a-498c-8276-2c7aa0969540","Type":"ContainerStarted","Data":"eab440740b0a21242af5c0364ac8efd26b6c03943ba49a53c8eef5d719029002"} Jan 21 16:40:28 crc kubenswrapper[4902]: I0121 16:40:28.133883 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" podStartSLOduration=2.395082393 podStartE2EDuration="3.133866298s" podCreationTimestamp="2026-01-21 16:40:25 +0000 UTC" firstStartedPulling="2026-01-21 16:40:26.149122278 +0000 UTC m=+7588.225955307" lastFinishedPulling="2026-01-21 16:40:26.887906153 +0000 UTC m=+7588.964739212" observedRunningTime="2026-01-21 16:40:28.121957431 +0000 UTC m=+7590.198790460" watchObservedRunningTime="2026-01-21 16:40:28.133866298 +0000 UTC m=+7590.210699327" Jan 21 16:40:33 crc kubenswrapper[4902]: I0121 16:40:33.294682 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:40:33 crc kubenswrapper[4902]: E0121 16:40:33.295706 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:40:47 crc kubenswrapper[4902]: I0121 16:40:47.294706 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:40:47 crc kubenswrapper[4902]: E0121 16:40:47.295906 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.211709 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-st4zd"] Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.216264 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.252170 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-st4zd"] Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.351364 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-utilities\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.351446 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-catalog-content\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.351511 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrcfx\" (UniqueName: \"kubernetes.io/projected/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-kube-api-access-hrcfx\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.454290 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-catalog-content\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.454427 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrcfx\" (UniqueName: \"kubernetes.io/projected/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-kube-api-access-hrcfx\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.454877 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-catalog-content\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.455776 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-utilities\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.456248 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-utilities\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.488904 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrcfx\" (UniqueName: \"kubernetes.io/projected/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-kube-api-access-hrcfx\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.566164 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.082065 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-st4zd"] Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.410488 4902 generic.go:334] "Generic (PLEG): container finished" podID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerID="31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0" exitCode=0 Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.410741 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st4zd" event={"ID":"4bf7e55d-ec94-44b6-96c2-04452baeb3b6","Type":"ContainerDied","Data":"31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0"} Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.410888 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st4zd" event={"ID":"4bf7e55d-ec94-44b6-96c2-04452baeb3b6","Type":"ContainerStarted","Data":"bb7ccdf2827a1f6dd75b1026bcd57af8e936b222c3cfce0567e1538bfba4bc6e"} Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.611897 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mz8dz"] Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.614658 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.624302 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mz8dz"] Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.695726 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-catalog-content\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.695888 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6tqd\" (UniqueName: \"kubernetes.io/projected/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-kube-api-access-q6tqd\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.695963 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-utilities\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.798389 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6tqd\" (UniqueName: \"kubernetes.io/projected/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-kube-api-access-q6tqd\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.798525 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-utilities\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.799060 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-utilities\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.799257 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-catalog-content\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.799579 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-catalog-content\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.838349 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q6tqd\" (UniqueName: \"kubernetes.io/projected/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-kube-api-access-q6tqd\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.949341 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:55 crc kubenswrapper[4902]: I0121 16:40:55.533332 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mz8dz"] Jan 21 16:40:56 crc kubenswrapper[4902]: I0121 16:40:56.433907 4902 generic.go:334] "Generic (PLEG): container finished" podID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerID="14dd86a2e3b7d5c82bcad09aa1bc2117f7eed1b5efb481174f490e8dcceec431" exitCode=0 Jan 21 16:40:56 crc kubenswrapper[4902]: I0121 16:40:56.434215 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz8dz" event={"ID":"b65e05c0-de41-4782-ba9c-a82a8ab0f83a","Type":"ContainerDied","Data":"14dd86a2e3b7d5c82bcad09aa1bc2117f7eed1b5efb481174f490e8dcceec431"} Jan 21 16:40:56 crc kubenswrapper[4902]: I0121 16:40:56.434242 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz8dz" event={"ID":"b65e05c0-de41-4782-ba9c-a82a8ab0f83a","Type":"ContainerStarted","Data":"ee8e92833fe775226f0b1a1c483d4e9507784ce3af65b37360c09279b01577fa"} Jan 21 16:40:56 crc kubenswrapper[4902]: I0121 16:40:56.438298 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st4zd" event={"ID":"4bf7e55d-ec94-44b6-96c2-04452baeb3b6","Type":"ContainerStarted","Data":"f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509"} Jan 21 16:40:59 crc kubenswrapper[4902]: I0121 16:40:59.468409 4902 generic.go:334] "Generic (PLEG): container finished" podID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerID="f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509" exitCode=0 Jan 21 16:40:59 crc kubenswrapper[4902]: I0121 16:40:59.469081 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st4zd" event={"ID":"4bf7e55d-ec94-44b6-96c2-04452baeb3b6","Type":"ContainerDied","Data":"f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509"} Jan 21 16:40:59 crc kubenswrapper[4902]: I0121 16:40:59.476975 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz8dz" event={"ID":"b65e05c0-de41-4782-ba9c-a82a8ab0f83a","Type":"ContainerStarted","Data":"f2a60e3589ca79881dcfa599a9ffc2679c7179019127fdf6fe4134dd2dc99dd8"} Jan 21 16:41:01 crc kubenswrapper[4902]: I0121 16:41:01.504503 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st4zd" event={"ID":"4bf7e55d-ec94-44b6-96c2-04452baeb3b6","Type":"ContainerStarted","Data":"f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a"} Jan 21 16:41:01 crc kubenswrapper[4902]: I0121 16:41:01.508403 4902 generic.go:334] "Generic (PLEG): container finished" podID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerID="f2a60e3589ca79881dcfa599a9ffc2679c7179019127fdf6fe4134dd2dc99dd8" exitCode=0 Jan 21 16:41:01 crc kubenswrapper[4902]: I0121 16:41:01.508457 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz8dz" 
event={"ID":"b65e05c0-de41-4782-ba9c-a82a8ab0f83a","Type":"ContainerDied","Data":"f2a60e3589ca79881dcfa599a9ffc2679c7179019127fdf6fe4134dd2dc99dd8"} Jan 21 16:41:01 crc kubenswrapper[4902]: I0121 16:41:01.533420 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-st4zd" podStartSLOduration=2.4120412399999998 podStartE2EDuration="8.5333931s" podCreationTimestamp="2026-01-21 16:40:53 +0000 UTC" firstStartedPulling="2026-01-21 16:40:54.414463042 +0000 UTC m=+7616.491296081" lastFinishedPulling="2026-01-21 16:41:00.535814912 +0000 UTC m=+7622.612647941" observedRunningTime="2026-01-21 16:41:01.52277337 +0000 UTC m=+7623.599606409" watchObservedRunningTime="2026-01-21 16:41:01.5333931 +0000 UTC m=+7623.610226169" Jan 21 16:41:02 crc kubenswrapper[4902]: I0121 16:41:02.295612 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:41:02 crc kubenswrapper[4902]: E0121 16:41:02.296666 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:41:02 crc kubenswrapper[4902]: I0121 16:41:02.524035 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz8dz" event={"ID":"b65e05c0-de41-4782-ba9c-a82a8ab0f83a","Type":"ContainerStarted","Data":"1d8a9965a9fa69ca3a927f548d26553ae82c7a81151442950d884266cba4af26"} Jan 21 16:41:02 crc kubenswrapper[4902]: I0121 16:41:02.550185 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mz8dz" podStartSLOduration=3.012806065 podStartE2EDuration="8.550168191s" podCreationTimestamp="2026-01-21 16:40:54 +0000 UTC" firstStartedPulling="2026-01-21 16:40:56.436319893 +0000 UTC m=+7618.513152932" lastFinishedPulling="2026-01-21 16:41:01.973682019 +0000 UTC m=+7624.050515058" observedRunningTime="2026-01-21 16:41:02.549228034 +0000 UTC m=+7624.626061073" watchObservedRunningTime="2026-01-21 16:41:02.550168191 +0000 UTC m=+7624.627001210" Jan 21 16:41:03 crc kubenswrapper[4902]: I0121 16:41:03.566302 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:41:03 crc kubenswrapper[4902]: I0121 16:41:03.566579 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:41:04 crc kubenswrapper[4902]: I0121 16:41:04.615870 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-st4zd" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="registry-server" probeResult="failure" output=< Jan 21 16:41:04 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 16:41:04 crc kubenswrapper[4902]: > Jan 21 16:41:04 crc kubenswrapper[4902]: I0121 16:41:04.949705 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:41:04 crc kubenswrapper[4902]: I0121 16:41:04.949770 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:41:05 crc kubenswrapper[4902]: I0121 16:41:05.026447 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:41:08 crc kubenswrapper[4902]: I0121 16:41:08.629788 4902 generic.go:334] "Generic (PLEG): container finished" podID="4570bbab-b55a-498c-8276-2c7aa0969540" containerID="eab440740b0a21242af5c0364ac8efd26b6c03943ba49a53c8eef5d719029002" exitCode=2 Jan 21 16:41:08 crc kubenswrapper[4902]: I0121 16:41:08.629890 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" event={"ID":"4570bbab-b55a-498c-8276-2c7aa0969540","Type":"ContainerDied","Data":"eab440740b0a21242af5c0364ac8efd26b6c03943ba49a53c8eef5d719029002"} Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.093534 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.196427 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdvm7\" (UniqueName: \"kubernetes.io/projected/4570bbab-b55a-498c-8276-2c7aa0969540-kube-api-access-tdvm7\") pod \"4570bbab-b55a-498c-8276-2c7aa0969540\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.196653 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-ssh-key-openstack-cell1\") pod \"4570bbab-b55a-498c-8276-2c7aa0969540\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.196805 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-inventory\") pod \"4570bbab-b55a-498c-8276-2c7aa0969540\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.202659 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4570bbab-b55a-498c-8276-2c7aa0969540-kube-api-access-tdvm7" (OuterVolumeSpecName: "kube-api-access-tdvm7") pod "4570bbab-b55a-498c-8276-2c7aa0969540" (UID: "4570bbab-b55a-498c-8276-2c7aa0969540"). InnerVolumeSpecName "kube-api-access-tdvm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.225962 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "4570bbab-b55a-498c-8276-2c7aa0969540" (UID: "4570bbab-b55a-498c-8276-2c7aa0969540"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.226831 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-inventory" (OuterVolumeSpecName: "inventory") pod "4570bbab-b55a-498c-8276-2c7aa0969540" (UID: "4570bbab-b55a-498c-8276-2c7aa0969540"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.300358 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.300411 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.300423 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdvm7\" (UniqueName: \"kubernetes.io/projected/4570bbab-b55a-498c-8276-2c7aa0969540-kube-api-access-tdvm7\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.653255 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" event={"ID":"4570bbab-b55a-498c-8276-2c7aa0969540","Type":"ContainerDied","Data":"88ff5cbcfa1e27400f64e32a1c15c4a3a56ae22145423ed31c76145fed2fd012"} Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.653751 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88ff5cbcfa1e27400f64e32a1c15c4a3a56ae22145423ed31c76145fed2fd012" Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.653376 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:41:13 crc kubenswrapper[4902]: I0121 16:41:13.618133 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:41:13 crc kubenswrapper[4902]: I0121 16:41:13.665325 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:41:13 crc kubenswrapper[4902]: I0121 16:41:13.917261 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-st4zd"] Jan 21 16:41:14 crc kubenswrapper[4902]: I0121 16:41:14.687064 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-st4zd" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="registry-server" containerID="cri-o://f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a" gracePeriod=2 Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.012293 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.172074 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-st4zd"
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.345455 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrcfx\" (UniqueName: \"kubernetes.io/projected/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-kube-api-access-hrcfx\") pod \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") "
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.345616 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-catalog-content\") pod \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") "
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.346107 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-utilities\") pod \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") "
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.347384 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-utilities" (OuterVolumeSpecName: "utilities") pod "4bf7e55d-ec94-44b6-96c2-04452baeb3b6" (UID: "4bf7e55d-ec94-44b6-96c2-04452baeb3b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.350823 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-kube-api-access-hrcfx" (OuterVolumeSpecName: "kube-api-access-hrcfx") pod "4bf7e55d-ec94-44b6-96c2-04452baeb3b6" (UID: "4bf7e55d-ec94-44b6-96c2-04452baeb3b6"). InnerVolumeSpecName "kube-api-access-hrcfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.449162 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrcfx\" (UniqueName: \"kubernetes.io/projected/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-kube-api-access-hrcfx\") on node \"crc\" DevicePath \"\""
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.449195 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.473551 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bf7e55d-ec94-44b6-96c2-04452baeb3b6" (UID: "4bf7e55d-ec94-44b6-96c2-04452baeb3b6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.551085 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.696919 4902 generic.go:334] "Generic (PLEG): container finished" podID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerID="f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a" exitCode=0
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.696961 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st4zd" event={"ID":"4bf7e55d-ec94-44b6-96c2-04452baeb3b6","Type":"ContainerDied","Data":"f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a"}
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.696995 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st4zd" event={"ID":"4bf7e55d-ec94-44b6-96c2-04452baeb3b6","Type":"ContainerDied","Data":"bb7ccdf2827a1f6dd75b1026bcd57af8e936b222c3cfce0567e1538bfba4bc6e"}
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.697013 4902 scope.go:117] "RemoveContainer" containerID="f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a"
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.697160 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-st4zd"
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.830390 4902 scope.go:117] "RemoveContainer" containerID="f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509"
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.837427 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-st4zd"]
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.847460 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-st4zd"]
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.852966 4902 scope.go:117] "RemoveContainer" containerID="31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0"
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.922073 4902 scope.go:117] "RemoveContainer" containerID="f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a"
Jan 21 16:41:15 crc kubenswrapper[4902]: E0121 16:41:15.922555 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a\": container with ID starting with f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a not found: ID does not exist" containerID="f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a"
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.922595 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a"} err="failed to get container status \"f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a\": rpc error: code = NotFound desc = could not find container \"f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a\": container with ID starting with f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a not found: ID does not exist"
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.922635 4902 scope.go:117] "RemoveContainer" containerID="f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509"
Jan 21 16:41:15 crc kubenswrapper[4902]: E0121 16:41:15.922969 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509\": container with ID starting with f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509 not found: ID does not exist" containerID="f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509"
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.923021 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509"} err="failed to get container status \"f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509\": rpc error: code = NotFound desc = could not find container \"f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509\": container with ID starting with f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509 not found: ID does not exist"
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.923073 4902 scope.go:117] "RemoveContainer" containerID="31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0"
Jan 21 16:41:15 crc kubenswrapper[4902]: E0121 16:41:15.923473 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0\": container with ID starting with 31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0 not found: ID does not exist" containerID="31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0"
Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.923506 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0"} err="failed to get container status \"31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0\": rpc error: code = NotFound desc = could not find container \"31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0\": container with ID starting with 31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0 not found: ID does not exist"
Jan 21 16:41:16 crc kubenswrapper[4902]: I0121 16:41:16.314295 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" path="/var/lib/kubelet/pods/4bf7e55d-ec94-44b6-96c2-04452baeb3b6/volumes"
Jan 21 16:41:17 crc kubenswrapper[4902]: I0121 16:41:17.331386 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"
Jan 21 16:41:17 crc kubenswrapper[4902]: E0121 16:41:17.331876 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:41:17 crc kubenswrapper[4902]: I0121 16:41:17.338791 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mz8dz"]
Jan 21 16:41:17 crc kubenswrapper[4902]: I0121 16:41:17.339056 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mz8dz" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerName="registry-server" containerID="cri-o://1d8a9965a9fa69ca3a927f548d26553ae82c7a81151442950d884266cba4af26" gracePeriod=2
Jan 21 16:41:17 crc kubenswrapper[4902]: I0121 16:41:17.718268 4902 generic.go:334] "Generic (PLEG): container finished" podID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerID="1d8a9965a9fa69ca3a927f548d26553ae82c7a81151442950d884266cba4af26" exitCode=0
Jan 21 16:41:17 crc kubenswrapper[4902]: I0121 16:41:17.718322 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz8dz" event={"ID":"b65e05c0-de41-4782-ba9c-a82a8ab0f83a","Type":"ContainerDied","Data":"1d8a9965a9fa69ca3a927f548d26553ae82c7a81151442950d884266cba4af26"}
Jan 21 16:41:17 crc kubenswrapper[4902]: I0121 16:41:17.889127 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mz8dz"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.029418 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-2qbs2"]
Jan 21 16:41:18 crc kubenswrapper[4902]: E0121 16:41:18.030119 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerName="registry-server"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030136 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerName="registry-server"
Jan 21 16:41:18 crc kubenswrapper[4902]: E0121 16:41:18.030148 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="extract-content"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030155 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="extract-content"
Jan 21 16:41:18 crc kubenswrapper[4902]: E0121 16:41:18.030199 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="registry-server"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030206 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="registry-server"
Jan 21 16:41:18 crc kubenswrapper[4902]: E0121 16:41:18.030223 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4570bbab-b55a-498c-8276-2c7aa0969540" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030230 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4570bbab-b55a-498c-8276-2c7aa0969540" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:41:18 crc kubenswrapper[4902]: E0121 16:41:18.030241 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerName="extract-content"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030247 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerName="extract-content"
Jan 21 16:41:18 crc kubenswrapper[4902]: E0121 16:41:18.030258 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerName="extract-utilities"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030264 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerName="extract-utilities"
Jan 21 16:41:18 crc kubenswrapper[4902]: E0121 16:41:18.030276 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="extract-utilities"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030283 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="extract-utilities"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030458 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="registry-server"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030470 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4570bbab-b55a-498c-8276-2c7aa0969540" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030490 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerName="registry-server"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.031244 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.034273 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.034718 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.034877 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.035029 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.045461 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-2qbs2"]
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.061332 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-utilities\") pod \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") "
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.061435 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6tqd\" (UniqueName: \"kubernetes.io/projected/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-kube-api-access-q6tqd\") pod \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") "
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.061621 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-catalog-content\") pod \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") "
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.063680 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-utilities" (OuterVolumeSpecName: "utilities") pod "b65e05c0-de41-4782-ba9c-a82a8ab0f83a" (UID: "b65e05c0-de41-4782-ba9c-a82a8ab0f83a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.072887 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-kube-api-access-q6tqd" (OuterVolumeSpecName: "kube-api-access-q6tqd") pod "b65e05c0-de41-4782-ba9c-a82a8ab0f83a" (UID: "b65e05c0-de41-4782-ba9c-a82a8ab0f83a"). InnerVolumeSpecName "kube-api-access-q6tqd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.133138 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b65e05c0-de41-4782-ba9c-a82a8ab0f83a" (UID: "b65e05c0-de41-4782-ba9c-a82a8ab0f83a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.164694 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkq24\" (UniqueName: \"kubernetes.io/projected/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-kube-api-access-nkq24\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.164928 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.165083 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-inventory\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.165207 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.165238 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6tqd\" (UniqueName: \"kubernetes.io/projected/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-kube-api-access-q6tqd\") on node \"crc\" DevicePath \"\""
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.165251 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.266754 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkq24\" (UniqueName: \"kubernetes.io/projected/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-kube-api-access-nkq24\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.266899 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.266959 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-inventory\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.270652 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.271626 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-inventory\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.283870 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkq24\" (UniqueName: \"kubernetes.io/projected/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-kube-api-access-nkq24\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.434328 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.786403 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz8dz" event={"ID":"b65e05c0-de41-4782-ba9c-a82a8ab0f83a","Type":"ContainerDied","Data":"ee8e92833fe775226f0b1a1c483d4e9507784ce3af65b37360c09279b01577fa"}
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.786670 4902 scope.go:117] "RemoveContainer" containerID="1d8a9965a9fa69ca3a927f548d26553ae82c7a81151442950d884266cba4af26"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.786872 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mz8dz"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.829317 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mz8dz"]
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.841021 4902 scope.go:117] "RemoveContainer" containerID="f2a60e3589ca79881dcfa599a9ffc2679c7179019127fdf6fe4134dd2dc99dd8"
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.845948 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mz8dz"]
Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.866618 4902 scope.go:117] "RemoveContainer" containerID="14dd86a2e3b7d5c82bcad09aa1bc2117f7eed1b5efb481174f490e8dcceec431"
Jan 21 16:41:19 crc kubenswrapper[4902]: I0121 16:41:19.263395 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-2qbs2"]
Jan 21 16:41:19 crc kubenswrapper[4902]: I0121 16:41:19.797136 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" event={"ID":"7ddf7812-c5ee-4c59-ad12-19f7b1a00442","Type":"ContainerStarted","Data":"42b3b08c13fd764f45a6ac87813e34b5bd5db9232e160886c1e404847f1ca7b7"}
Jan 21 16:41:20 crc kubenswrapper[4902]: I0121 16:41:20.313858 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" path="/var/lib/kubelet/pods/b65e05c0-de41-4782-ba9c-a82a8ab0f83a/volumes"
Jan 21 16:41:20 crc kubenswrapper[4902]: I0121 16:41:20.807414 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" event={"ID":"7ddf7812-c5ee-4c59-ad12-19f7b1a00442","Type":"ContainerStarted","Data":"b0ea5beb9b05a87f4ba1b2af835803387b11edca7ca299b52d070d2d5b519bdd"}
Jan 21 16:41:20 crc kubenswrapper[4902]: I0121 16:41:20.835663 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" podStartSLOduration=2.374570648 podStartE2EDuration="2.835635605s" podCreationTimestamp="2026-01-21 16:41:18 +0000 UTC" firstStartedPulling="2026-01-21 16:41:19.272620687 +0000 UTC m=+7641.349453716" lastFinishedPulling="2026-01-21 16:41:19.733685624 +0000 UTC m=+7641.810518673" observedRunningTime="2026-01-21 16:41:20.825016694 +0000 UTC m=+7642.901849733" watchObservedRunningTime="2026-01-21 16:41:20.835635605 +0000 UTC m=+7642.912468634"
Jan 21 16:41:32 crc kubenswrapper[4902]: I0121 16:41:32.295635 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"
Jan 21 16:41:32 crc kubenswrapper[4902]: E0121 16:41:32.296436 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:41:46 crc kubenswrapper[4902]: I0121 16:41:46.295596 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"
Jan 21 16:41:46 crc kubenswrapper[4902]: E0121 16:41:46.296377 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:41:56 crc kubenswrapper[4902]: I0121 16:41:56.171217 4902 generic.go:334] "Generic (PLEG): container finished" podID="7ddf7812-c5ee-4c59-ad12-19f7b1a00442" containerID="b0ea5beb9b05a87f4ba1b2af835803387b11edca7ca299b52d070d2d5b519bdd" exitCode=2
Jan 21 16:41:56 crc kubenswrapper[4902]: I0121 16:41:56.171328 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" event={"ID":"7ddf7812-c5ee-4c59-ad12-19f7b1a00442","Type":"ContainerDied","Data":"b0ea5beb9b05a87f4ba1b2af835803387b11edca7ca299b52d070d2d5b519bdd"}
Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.635437 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2"
Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.749143 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-ssh-key-openstack-cell1\") pod \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") "
Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.749301 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-inventory\") pod \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") "
Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.749429 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkq24\" (UniqueName: \"kubernetes.io/projected/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-kube-api-access-nkq24\") pod \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") "
Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.755425 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-kube-api-access-nkq24" (OuterVolumeSpecName: "kube-api-access-nkq24") pod "7ddf7812-c5ee-4c59-ad12-19f7b1a00442" (UID: "7ddf7812-c5ee-4c59-ad12-19f7b1a00442"). InnerVolumeSpecName "kube-api-access-nkq24". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.785527 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-inventory" (OuterVolumeSpecName: "inventory") pod "7ddf7812-c5ee-4c59-ad12-19f7b1a00442" (UID: "7ddf7812-c5ee-4c59-ad12-19f7b1a00442"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.797351 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "7ddf7812-c5ee-4c59-ad12-19f7b1a00442" (UID: "7ddf7812-c5ee-4c59-ad12-19f7b1a00442"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.852830 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkq24\" (UniqueName: \"kubernetes.io/projected/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-kube-api-access-nkq24\") on node \"crc\" DevicePath \"\""
Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.852881 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.852900 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:41:58 crc kubenswrapper[4902]: I0121 16:41:58.217965 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" event={"ID":"7ddf7812-c5ee-4c59-ad12-19f7b1a00442","Type":"ContainerDied","Data":"42b3b08c13fd764f45a6ac87813e34b5bd5db9232e160886c1e404847f1ca7b7"}
Jan 21 16:41:58 crc kubenswrapper[4902]: I0121 16:41:58.218008 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42b3b08c13fd764f45a6ac87813e34b5bd5db9232e160886c1e404847f1ca7b7"
Jan 21 16:41:58 crc kubenswrapper[4902]: I0121 16:41:58.218079 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2"
Jan 21 16:42:00 crc kubenswrapper[4902]: I0121 16:42:00.295533 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"
Jan 21 16:42:00 crc kubenswrapper[4902]: E0121 16:42:00.296497 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:42:15 crc kubenswrapper[4902]: I0121 16:42:15.295465 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"
Jan 21 16:42:15 crc kubenswrapper[4902]: E0121 16:42:15.296756 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.046249 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-crnmm"]
Jan 21 16:42:16 crc kubenswrapper[4902]: E0121 16:42:16.046758 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ddf7812-c5ee-4c59-ad12-19f7b1a00442" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.046778 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ddf7812-c5ee-4c59-ad12-19f7b1a00442" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.047142 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ddf7812-c5ee-4c59-ad12-19f7b1a00442" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.048286 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-crnmm"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.052522 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.052648 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.052900 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.053228 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.072173 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-crnmm"]
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.214349 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.214769 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-inventory\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.214833 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5bdh\" (UniqueName: \"kubernetes.io/projected/9d010648-1998-4311-917b-20626c2f5586-kube-api-access-c5bdh\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.317352 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.318546 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-inventory\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.318762 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5bdh\" (UniqueName: \"kubernetes.io/projected/9d010648-1998-4311-917b-20626c2f5586-kube-api-access-c5bdh\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.328656 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.334329 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-inventory\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.355232 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5bdh\" (UniqueName: \"kubernetes.io/projected/9d010648-1998-4311-917b-20626c2f5586-kube-api-access-c5bdh\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.386302 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-crnmm"
Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.939994 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-crnmm"]
Jan 21 16:42:16 crc kubenswrapper[4902]: W0121 16:42:16.948001 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d010648_1998_4311_917b_20626c2f5586.slice/crio-5c9d1edc35380fe37afa72007a3a41355f5f7dc6a6b0f3579382af699a5d5cb7 WatchSource:0}: Error finding container 5c9d1edc35380fe37afa72007a3a41355f5f7dc6a6b0f3579382af699a5d5cb7: Status 404 returned error can't find the container with id 5c9d1edc35380fe37afa72007a3a41355f5f7dc6a6b0f3579382af699a5d5cb7
Jan 21 16:42:17 crc kubenswrapper[4902]: I0121 16:42:17.414580 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-crnmm" event={"ID":"9d010648-1998-4311-917b-20626c2f5586","Type":"ContainerStarted","Data":"5c9d1edc35380fe37afa72007a3a41355f5f7dc6a6b0f3579382af699a5d5cb7"}
Jan 21 16:42:18 crc kubenswrapper[4902]: I0121 16:42:18.424198 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-crnmm" event={"ID":"9d010648-1998-4311-917b-20626c2f5586","Type":"ContainerStarted","Data":"70faaab266dd818acbdadfb66ada41235c8ee46467514dc67255ecaa970bb0ce"}
Jan 21 16:42:18 crc kubenswrapper[4902]: I0121 16:42:18.445329 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-crnmm" podStartSLOduration=2.030778996 podStartE2EDuration="2.445313435s" podCreationTimestamp="2026-01-21 16:42:16 +0000 UTC" firstStartedPulling="2026-01-21 16:42:16.950647182 +0000 UTC m=+7699.027480221" lastFinishedPulling="2026-01-21 16:42:17.365181631 +0000 UTC m=+7699.442014660" observedRunningTime="2026-01-21 16:42:18.443698949 +0000 UTC m=+7700.520531988" watchObservedRunningTime="2026-01-21 16:42:18.445313435 +0000 UTC m=+7700.522146464"
Jan 21 16:42:29 crc kubenswrapper[4902]: I0121 16:42:29.294457 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"
Jan 21 16:42:29 crc kubenswrapper[4902]: E0121 16:42:29.295291 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:42:40 crc kubenswrapper[4902]: I0121 16:42:40.297084 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"
Jan 21 16:42:40 crc kubenswrapper[4902]: E0121 16:42:40.297896 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:42:52 crc kubenswrapper[4902]: I0121 16:42:52.294833 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"
Jan 21 16:42:52 crc kubenswrapper[4902]: E0121 16:42:52.295688 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:42:56 crc kubenswrapper[4902]: I0121 16:42:56.828962 4902 generic.go:334] "Generic (PLEG): container finished" podID="9d010648-1998-4311-917b-20626c2f5586" containerID="70faaab266dd818acbdadfb66ada41235c8ee46467514dc67255ecaa970bb0ce" exitCode=2
Jan 21 16:42:56 crc kubenswrapper[4902]: I0121 16:42:56.829082 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-crnmm" event={"ID":"9d010648-1998-4311-917b-20626c2f5586","Type":"ContainerDied","Data":"70faaab266dd818acbdadfb66ada41235c8ee46467514dc67255ecaa970bb0ce"}
Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.331375 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-crnmm"
Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.483517 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5bdh\" (UniqueName: \"kubernetes.io/projected/9d010648-1998-4311-917b-20626c2f5586-kube-api-access-c5bdh\") pod \"9d010648-1998-4311-917b-20626c2f5586\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") "
Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.483704 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-inventory\") pod \"9d010648-1998-4311-917b-20626c2f5586\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") "
Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.485094 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-ssh-key-openstack-cell1\") pod \"9d010648-1998-4311-917b-20626c2f5586\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") "
Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.492340 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d010648-1998-4311-917b-20626c2f5586-kube-api-access-c5bdh" (OuterVolumeSpecName: "kube-api-access-c5bdh") pod "9d010648-1998-4311-917b-20626c2f5586" (UID: "9d010648-1998-4311-917b-20626c2f5586"). InnerVolumeSpecName "kube-api-access-c5bdh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.524471 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "9d010648-1998-4311-917b-20626c2f5586" (UID: "9d010648-1998-4311-917b-20626c2f5586"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.526241 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-inventory" (OuterVolumeSpecName: "inventory") pod "9d010648-1998-4311-917b-20626c2f5586" (UID: "9d010648-1998-4311-917b-20626c2f5586"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.587799 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5bdh\" (UniqueName: \"kubernetes.io/projected/9d010648-1998-4311-917b-20626c2f5586-kube-api-access-c5bdh\") on node \"crc\" DevicePath \"\""
Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.587842 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.587857 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.854674 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-crnmm" event={"ID":"9d010648-1998-4311-917b-20626c2f5586","Type":"ContainerDied","Data":"5c9d1edc35380fe37afa72007a3a41355f5f7dc6a6b0f3579382af699a5d5cb7"}
Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.854715 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-crnmm"
Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.854719 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c9d1edc35380fe37afa72007a3a41355f5f7dc6a6b0f3579382af699a5d5cb7"
Jan 21 16:43:07 crc kubenswrapper[4902]: I0121 16:43:07.296120 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"
Jan 21 16:43:07 crc kubenswrapper[4902]: E0121 16:43:07.297143 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:43:18 crc kubenswrapper[4902]: I0121 16:43:18.301697 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"
Jan 21 16:43:19 crc kubenswrapper[4902]: I0121 16:43:19.062541 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"46dab60a77a31c9c125a7eb039a17b28b44898970f8705055f9ff1b6d0fef030"}
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.050179 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-755fc"]
Jan 21 16:43:36 crc kubenswrapper[4902]: E0121 16:43:36.051643 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d010648-1998-4311-917b-20626c2f5586" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.051662 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d010648-1998-4311-917b-20626c2f5586" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.051933 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d010648-1998-4311-917b-20626c2f5586" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.053876 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-755fc"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.058006 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.058439 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.058522 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.060135 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-755fc"]
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.063762 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.108693 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.109031 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv6x7\" (UniqueName: \"kubernetes.io/projected/1d7d4592-eaab-4fdb-a63f-6b92285b1129-kube-api-access-cv6x7\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.109111 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-inventory\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.211401 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv6x7\" (UniqueName: \"kubernetes.io/projected/1d7d4592-eaab-4fdb-a63f-6b92285b1129-kube-api-access-cv6x7\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.211565 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-inventory\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.213643 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.218495 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-inventory\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.219474 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.239240 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv6x7\" (UniqueName: \"kubernetes.io/projected/1d7d4592-eaab-4fdb-a63f-6b92285b1129-kube-api-access-cv6x7\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.392150 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-755fc"
Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.945365 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-755fc"]
Jan 21 16:43:36 crc kubenswrapper[4902]: W0121 16:43:36.946429 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d7d4592_eaab_4fdb_a63f_6b92285b1129.slice/crio-7606ed6c8b018995a24c565b45d1cfab6b7e8fc84d30b4a5dd724907ef6dc615 WatchSource:0}: Error finding container 7606ed6c8b018995a24c565b45d1cfab6b7e8fc84d30b4a5dd724907ef6dc615: Status 404 returned error can't find the container with id 7606ed6c8b018995a24c565b45d1cfab6b7e8fc84d30b4a5dd724907ef6dc615
Jan 21 16:43:37 crc kubenswrapper[4902]: I0121 16:43:37.232417 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-755fc" event={"ID":"1d7d4592-eaab-4fdb-a63f-6b92285b1129","Type":"ContainerStarted","Data":"7606ed6c8b018995a24c565b45d1cfab6b7e8fc84d30b4a5dd724907ef6dc615"}
Jan 21 16:43:38 crc kubenswrapper[4902]: I0121 16:43:38.244303 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-755fc" event={"ID":"1d7d4592-eaab-4fdb-a63f-6b92285b1129","Type":"ContainerStarted","Data":"a03258a4cb27bf6df46d45d9414de4b0caf988d4d615537ec950490a5b51869c"}
Jan 21 16:43:38 crc kubenswrapper[4902]: I0121 16:43:38.274279 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-755fc" podStartSLOduration=1.634461135 podStartE2EDuration="2.274253199s" podCreationTimestamp="2026-01-21 16:43:36 +0000 UTC" firstStartedPulling="2026-01-21 16:43:36.949734651 +0000 UTC m=+7779.026567670" lastFinishedPulling="2026-01-21 16:43:37.589526705 +0000 UTC m=+7779.666359734" observedRunningTime="2026-01-21 16:43:38.264017629 +0000 UTC m=+7780.340850668" watchObservedRunningTime="2026-01-21 16:43:38.274253199 +0000 UTC m=+7780.351086228"
Jan 21 16:44:12 crc kubenswrapper[4902]: I0121 16:44:12.581298 4902 generic.go:334] "Generic (PLEG): container finished" podID="1d7d4592-eaab-4fdb-a63f-6b92285b1129" containerID="a03258a4cb27bf6df46d45d9414de4b0caf988d4d615537ec950490a5b51869c" exitCode=2
Jan 21 16:44:12 crc kubenswrapper[4902]: I0121 16:44:12.581353 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-755fc" event={"ID":"1d7d4592-eaab-4fdb-a63f-6b92285b1129","Type":"ContainerDied","Data":"a03258a4cb27bf6df46d45d9414de4b0caf988d4d615537ec950490a5b51869c"}
Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.507441 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-755fc"
Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.613025 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv6x7\" (UniqueName: \"kubernetes.io/projected/1d7d4592-eaab-4fdb-a63f-6b92285b1129-kube-api-access-cv6x7\") pod \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") "
Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.613094 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-ssh-key-openstack-cell1\") pod \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") "
Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.613146 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-inventory\") pod \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") "
Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.624670 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d7d4592-eaab-4fdb-a63f-6b92285b1129-kube-api-access-cv6x7" (OuterVolumeSpecName: "kube-api-access-cv6x7") pod "1d7d4592-eaab-4fdb-a63f-6b92285b1129" (UID: "1d7d4592-eaab-4fdb-a63f-6b92285b1129"). InnerVolumeSpecName "kube-api-access-cv6x7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.637433 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-755fc" event={"ID":"1d7d4592-eaab-4fdb-a63f-6b92285b1129","Type":"ContainerDied","Data":"7606ed6c8b018995a24c565b45d1cfab6b7e8fc84d30b4a5dd724907ef6dc615"}
Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.637477 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7606ed6c8b018995a24c565b45d1cfab6b7e8fc84d30b4a5dd724907ef6dc615"
Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.637494 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-755fc"
Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.656072 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "1d7d4592-eaab-4fdb-a63f-6b92285b1129" (UID: "1d7d4592-eaab-4fdb-a63f-6b92285b1129"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.675029 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-inventory" (OuterVolumeSpecName: "inventory") pod "1d7d4592-eaab-4fdb-a63f-6b92285b1129" (UID: "1d7d4592-eaab-4fdb-a63f-6b92285b1129"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.715262 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv6x7\" (UniqueName: \"kubernetes.io/projected/1d7d4592-eaab-4fdb-a63f-6b92285b1129-kube-api-access-cv6x7\") on node \"crc\" DevicePath \"\""
Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.715303 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.715316 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.212316 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"]
Jan 21 16:45:00 crc kubenswrapper[4902]: E0121 16:45:00.213754 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d7d4592-eaab-4fdb-a63f-6b92285b1129" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.213772 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7d4592-eaab-4fdb-a63f-6b92285b1129" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.214022 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d7d4592-eaab-4fdb-a63f-6b92285b1129" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.214838 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.217109 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.217328 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.233158 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"]
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.394581 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/504c5756-9427-4037-be3a-481fc1e8715f-secret-volume\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.394680 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/504c5756-9427-4037-be3a-481fc1e8715f-config-volume\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.394853 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vvx2\" (UniqueName: \"kubernetes.io/projected/504c5756-9427-4037-be3a-481fc1e8715f-kube-api-access-4vvx2\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.497185 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vvx2\" (UniqueName: \"kubernetes.io/projected/504c5756-9427-4037-be3a-481fc1e8715f-kube-api-access-4vvx2\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.497292 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/504c5756-9427-4037-be3a-481fc1e8715f-secret-volume\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.497369 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/504c5756-9427-4037-be3a-481fc1e8715f-config-volume\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.498656 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/504c5756-9427-4037-be3a-481fc1e8715f-config-volume\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.503076 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/504c5756-9427-4037-be3a-481fc1e8715f-secret-volume\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.515458 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vvx2\" (UniqueName: \"kubernetes.io/projected/504c5756-9427-4037-be3a-481fc1e8715f-kube-api-access-4vvx2\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"
Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.534168 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"
Jan 21 16:45:01 crc kubenswrapper[4902]: I0121 16:45:01.014984 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"]
Jan 21 16:45:01 crc kubenswrapper[4902]: I0121 16:45:01.186625 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" event={"ID":"504c5756-9427-4037-be3a-481fc1e8715f","Type":"ContainerStarted","Data":"281062accf2e074810f33f20bc1bf88bb0df7e0affa7d8b2581066f49b358d78"}
Jan 21 16:45:02 crc kubenswrapper[4902]: I0121 16:45:02.198800 4902 generic.go:334] "Generic (PLEG): container finished" podID="504c5756-9427-4037-be3a-481fc1e8715f" containerID="aa3c7bb404afe310e56cb2617f84d467c8f578e09af1f3e30d342fd88646315e" exitCode=0
Jan 21 16:45:02 crc kubenswrapper[4902]: I0121 16:45:02.198903 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" event={"ID":"504c5756-9427-4037-be3a-481fc1e8715f","Type":"ContainerDied","Data":"aa3c7bb404afe310e56cb2617f84d467c8f578e09af1f3e30d342fd88646315e"}
Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.518679 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.668498 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vvx2\" (UniqueName: \"kubernetes.io/projected/504c5756-9427-4037-be3a-481fc1e8715f-kube-api-access-4vvx2\") pod \"504c5756-9427-4037-be3a-481fc1e8715f\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.668569 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/504c5756-9427-4037-be3a-481fc1e8715f-config-volume\") pod \"504c5756-9427-4037-be3a-481fc1e8715f\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.668846 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/504c5756-9427-4037-be3a-481fc1e8715f-secret-volume\") pod \"504c5756-9427-4037-be3a-481fc1e8715f\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.669093 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/504c5756-9427-4037-be3a-481fc1e8715f-config-volume" (OuterVolumeSpecName: "config-volume") pod "504c5756-9427-4037-be3a-481fc1e8715f" (UID: "504c5756-9427-4037-be3a-481fc1e8715f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.669456 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/504c5756-9427-4037-be3a-481fc1e8715f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.673549 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/504c5756-9427-4037-be3a-481fc1e8715f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "504c5756-9427-4037-be3a-481fc1e8715f" (UID: "504c5756-9427-4037-be3a-481fc1e8715f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.674171 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/504c5756-9427-4037-be3a-481fc1e8715f-kube-api-access-4vvx2" (OuterVolumeSpecName: "kube-api-access-4vvx2") pod "504c5756-9427-4037-be3a-481fc1e8715f" (UID: "504c5756-9427-4037-be3a-481fc1e8715f"). InnerVolumeSpecName "kube-api-access-4vvx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.771625 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vvx2\" (UniqueName: \"kubernetes.io/projected/504c5756-9427-4037-be3a-481fc1e8715f-kube-api-access-4vvx2\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.771684 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/504c5756-9427-4037-be3a-481fc1e8715f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:04 crc kubenswrapper[4902]: I0121 16:45:04.225476 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" event={"ID":"504c5756-9427-4037-be3a-481fc1e8715f","Type":"ContainerDied","Data":"281062accf2e074810f33f20bc1bf88bb0df7e0affa7d8b2581066f49b358d78"} Jan 21 16:45:04 crc kubenswrapper[4902]: I0121 16:45:04.225522 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="281062accf2e074810f33f20bc1bf88bb0df7e0affa7d8b2581066f49b358d78" Jan 21 16:45:04 crc kubenswrapper[4902]: I0121 16:45:04.225555 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" Jan 21 16:45:04 crc kubenswrapper[4902]: I0121 16:45:04.586527 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"] Jan 21 16:45:04 crc kubenswrapper[4902]: I0121 16:45:04.595166 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"] Jan 21 16:45:06 crc kubenswrapper[4902]: I0121 16:45:06.307136 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f705e9e-4608-4e35-9f28-665a52f2aba6" path="/var/lib/kubelet/pods/2f705e9e-4608-4e35-9f28-665a52f2aba6/volumes" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.785523 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-649bt/must-gather-sx2rp"] Jan 21 16:45:35 crc kubenswrapper[4902]: E0121 16:45:35.786382 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504c5756-9427-4037-be3a-481fc1e8715f" containerName="collect-profiles" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.786425 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="504c5756-9427-4037-be3a-481fc1e8715f" containerName="collect-profiles" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.786637 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="504c5756-9427-4037-be3a-481fc1e8715f" containerName="collect-profiles" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.788166 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/must-gather-sx2rp" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.792943 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-649bt"/"openshift-service-ca.crt" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.793098 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-649bt"/"kube-root-ca.crt" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.793238 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-649bt"/"default-dockercfg-lrpnl" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.811523 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-649bt/must-gather-sx2rp"] Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.923939 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnzhl\" (UniqueName: \"kubernetes.io/projected/7f3b3035-07fa-46da-ba99-74131a56f5b2-kube-api-access-jnzhl\") pod \"must-gather-sx2rp\" (UID: \"7f3b3035-07fa-46da-ba99-74131a56f5b2\") " pod="openshift-must-gather-649bt/must-gather-sx2rp" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.924163 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7f3b3035-07fa-46da-ba99-74131a56f5b2-must-gather-output\") pod \"must-gather-sx2rp\" (UID: \"7f3b3035-07fa-46da-ba99-74131a56f5b2\") " pod="openshift-must-gather-649bt/must-gather-sx2rp" Jan 21 16:45:36 crc kubenswrapper[4902]: I0121 16:45:36.025608 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnzhl\" (UniqueName: \"kubernetes.io/projected/7f3b3035-07fa-46da-ba99-74131a56f5b2-kube-api-access-jnzhl\") pod \"must-gather-sx2rp\" (UID: \"7f3b3035-07fa-46da-ba99-74131a56f5b2\") " pod="openshift-must-gather-649bt/must-gather-sx2rp" Jan 21 16:45:36 crc kubenswrapper[4902]: I0121 16:45:36.025819 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7f3b3035-07fa-46da-ba99-74131a56f5b2-must-gather-output\") pod \"must-gather-sx2rp\" (UID: \"7f3b3035-07fa-46da-ba99-74131a56f5b2\") " pod="openshift-must-gather-649bt/must-gather-sx2rp" Jan 21 16:45:36 crc kubenswrapper[4902]: I0121 16:45:36.026816 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7f3b3035-07fa-46da-ba99-74131a56f5b2-must-gather-output\") pod \"must-gather-sx2rp\" (UID: \"7f3b3035-07fa-46da-ba99-74131a56f5b2\") " pod="openshift-must-gather-649bt/must-gather-sx2rp" Jan 21 16:45:36 crc kubenswrapper[4902]: I0121 16:45:36.053008 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnzhl\" (UniqueName: \"kubernetes.io/projected/7f3b3035-07fa-46da-ba99-74131a56f5b2-kube-api-access-jnzhl\") pod \"must-gather-sx2rp\" (UID: \"7f3b3035-07fa-46da-ba99-74131a56f5b2\") " pod="openshift-must-gather-649bt/must-gather-sx2rp" Jan 21 16:45:36 crc kubenswrapper[4902]: I0121 16:45:36.110666 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/must-gather-sx2rp" Jan 21 16:45:36 crc kubenswrapper[4902]: I0121 16:45:36.210368 4902 scope.go:117] "RemoveContainer" containerID="32fe8ff5a7cc5267205a3f1e8b759ee5d99a41ef6bca9732cd6d5478ff974b57" Jan 21 16:45:36 crc kubenswrapper[4902]: I0121 16:45:36.644704 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-649bt/must-gather-sx2rp"] Jan 21 16:45:36 crc kubenswrapper[4902]: I0121 16:45:36.648351 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:45:37 crc kubenswrapper[4902]: I0121 16:45:37.557797 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/must-gather-sx2rp" event={"ID":"7f3b3035-07fa-46da-ba99-74131a56f5b2","Type":"ContainerStarted","Data":"14a2be8de840189356082d6093558c67e6bee1d9a733fe7d50bb5332aa247b58"} Jan 21 16:45:43 crc kubenswrapper[4902]: I0121 16:45:43.618789 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/must-gather-sx2rp" event={"ID":"7f3b3035-07fa-46da-ba99-74131a56f5b2","Type":"ContainerStarted","Data":"3c104b5b2ba1ed6d0cdf612cdf265bf9d9f9732077b75589cf5586c334a05bbb"} Jan 21 16:45:43 crc kubenswrapper[4902]: I0121 16:45:43.619232 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/must-gather-sx2rp" event={"ID":"7f3b3035-07fa-46da-ba99-74131a56f5b2","Type":"ContainerStarted","Data":"ccfea4af07fa6cc489b0aec7bd59161ebb73b38f88167db3fd5bcc73fa7d7e58"} Jan 21 16:45:43 crc kubenswrapper[4902]: I0121 16:45:43.640393 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-649bt/must-gather-sx2rp" podStartSLOduration=2.24440557 podStartE2EDuration="8.64037188s" podCreationTimestamp="2026-01-21 16:45:35 +0000 UTC" firstStartedPulling="2026-01-21 16:45:36.647952974 +0000 UTC m=+7898.724786003" lastFinishedPulling="2026-01-21 16:45:43.043919284 +0000 UTC m=+7905.120752313" observedRunningTime="2026-01-21 16:45:43.634102403 +0000 UTC m=+7905.710935442" watchObservedRunningTime="2026-01-21 16:45:43.64037188 +0000 UTC m=+7905.717204919" Jan 21 16:45:47 crc kubenswrapper[4902]: E0121 16:45:47.252236 4902 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.21:59922->38.129.56.21:44701: write tcp 38.129.56.21:59922->38.129.56.21:44701: write: connection reset by peer Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.769850 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.770201 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.830192 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-649bt/crc-debug-4hbxl"] Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.831544 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.893287 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-host\") pod \"crc-debug-4hbxl\" (UID: \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\") " pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.893396 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8lt5\" (UniqueName: \"kubernetes.io/projected/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-kube-api-access-b8lt5\") pod \"crc-debug-4hbxl\" (UID: \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\") " pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.995830 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-host\") pod \"crc-debug-4hbxl\" (UID: \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\") " pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.995905 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8lt5\" (UniqueName: \"kubernetes.io/projected/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-kube-api-access-b8lt5\") pod \"crc-debug-4hbxl\" (UID: \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\") " pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.996064 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-host\") pod \"crc-debug-4hbxl\" (UID: \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\") " pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:45:48 crc kubenswrapper[4902]: I0121 16:45:48.018816 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8lt5\" (UniqueName: \"kubernetes.io/projected/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-kube-api-access-b8lt5\") pod \"crc-debug-4hbxl\" (UID: \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\") " pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:45:48 crc kubenswrapper[4902]: I0121 16:45:48.151540 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:45:48 crc kubenswrapper[4902]: W0121 16:45:48.190833 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb88f391_e5de_44fc_8bb9_7d7b4bddd96d.slice/crio-f7c7697f0980d691d3313b527a6369dc55eb3e3c975f66aca6b883c28023fe16 WatchSource:0}: Error finding container f7c7697f0980d691d3313b527a6369dc55eb3e3c975f66aca6b883c28023fe16: Status 404 returned error can't find the container with id f7c7697f0980d691d3313b527a6369dc55eb3e3c975f66aca6b883c28023fe16 Jan 21 16:45:48 crc kubenswrapper[4902]: I0121 16:45:48.667532 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/crc-debug-4hbxl" event={"ID":"db88f391-e5de-44fc-8bb9-7d7b4bddd96d","Type":"ContainerStarted","Data":"f7c7697f0980d691d3313b527a6369dc55eb3e3c975f66aca6b883c28023fe16"} Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.524559 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_657b791a-81e2-483e-8ae9-b261f3bc0c41/alertmanager/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.533292 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_657b791a-81e2-483e-8ae9-b261f3bc0c41/config-reloader/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.540288 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_657b791a-81e2-483e-8ae9-b261f3bc0c41/init-config-reloader/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.577402 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_da0893d4-ad82-4a00-8ccf-5e33ead4d85d/aodh-api/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.596410 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_da0893d4-ad82-4a00-8ccf-5e33ead4d85d/aodh-evaluator/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.610266 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_da0893d4-ad82-4a00-8ccf-5e33ead4d85d/aodh-notifier/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.625073 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_da0893d4-ad82-4a00-8ccf-5e33ead4d85d/aodh-listener/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.648789 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85645f8dd4-bf5z5_49dfaf72-0f35-4705-a9d8-830878fc46d1/barbican-api-log/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.656173 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85645f8dd4-bf5z5_49dfaf72-0f35-4705-a9d8-830878fc46d1/barbican-api/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.691021 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8458cc5fd6-z5j6z_95cef3f6-598c-483e-b2b6-bb3d2942f18e/barbican-keystone-listener-log/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.697816 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8458cc5fd6-z5j6z_95cef3f6-598c-483e-b2b6-bb3d2942f18e/barbican-keystone-listener/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.715153 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-c94b5b747-nxfg6_9162d3ad-8f1a-4998-9f4d-a1869af6a23f/barbican-worker-log/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.723035 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-c94b5b747-nxfg6_9162d3ad-8f1a-4998-9f4d-a1869af6a23f/barbican-worker/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.758184 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-zwtbg_03ebbaac-5961-4e6e-8709-93bb85975c9c/bootstrap-openstack-openstack-cell1/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.802882 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_57c9a2f0-4583-4438-b35f-f92aa9a7efe8/ceilometer-central-agent/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.911606 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_57c9a2f0-4583-4438-b35f-f92aa9a7efe8/ceilometer-notification-agent/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.937924 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_57c9a2f0-4583-4438-b35f-f92aa9a7efe8/sg-core/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.958141 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_57c9a2f0-4583-4438-b35f-f92aa9a7efe8/proxy-httpd/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.985779 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_24d9842a-4646-47c5-a81c-18e641f7617f/cinder-api-log/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.052657 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_24d9842a-4646-47c5-a81c-18e641f7617f/cinder-api/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.081249 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_16354b62-7b74-468c-8953-3a41b1dc1a66/cinder-scheduler/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.117596 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_16354b62-7b74-468c-8953-3a41b1dc1a66/probe/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.135603 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-jgd86_2418bfc5-bf9b-4397-bc7f-20aa86aa582a/configure-network-openstack-openstack-cell1/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.162169 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-2qbs2_7ddf7812-c5ee-4c59-ad12-19f7b1a00442/configure-os-openstack-openstack-cell1/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.184687 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-755fc_1d7d4592-eaab-4fdb-a63f-6b92285b1129/configure-os-openstack-openstack-cell1/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.203159 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-crnmm_9d010648-1998-4311-917b-20626c2f5586/configure-os-openstack-openstack-cell1/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.277020 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-w46l6_4570bbab-b55a-498c-8276-2c7aa0969540/configure-os-openstack-openstack-cell1/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.302319 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4c775f77-hlsqd_45e057f7-f682-43f2-a02c-effad070763f/dnsmasq-dns/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.316550 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4c775f77-hlsqd_45e057f7-f682-43f2-a02c-effad070763f/init/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.523472 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-lvw72_d171dc59-1575-4895-b80f-0886e901b704/download-cache-openstack-openstack-cell1/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.537455 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_43059835-649d-40c9-bf13-f46c9d6b65a6/glance-log/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.683385 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_43059835-649d-40c9-bf13-f46c9d6b65a6/glance-httpd/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.708618 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a21c1b8f-59f7-445b-bc8a-f8e89d7142e5/glance-log/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.783993 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a21c1b8f-59f7-445b-bc8a-f8e89d7142e5/glance-httpd/0.log" Jan 21 16:45:52 crc kubenswrapper[4902]: I0121 16:45:52.026213 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-575dc5884b-mwxz4_9bfec31e-5cec-4820-9f26-34413330e44c/heat-api/0.log" Jan 21 16:45:52 crc kubenswrapper[4902]: I0121 16:45:52.299582 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-5c8d887b44-lnw77_5acd47b5-1a65-41c3-af06-401bd9880c1f/heat-cfnapi/0.log" Jan 21 16:45:52 crc kubenswrapper[4902]: I0121 16:45:52.317930 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-68647965fb-5bvjr_bb701a34-be50-44cd-b277-b687e8499664/heat-engine/0.log" Jan 21 16:45:52 crc kubenswrapper[4902]: I0121 16:45:52.554865 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6845bd7746-jd2dk_d71e079c-1163-4e7e-ac94-0e92a0b602ad/horizon-log/0.log" Jan 21 16:45:52 crc kubenswrapper[4902]: I0121 16:45:52.645901 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6845bd7746-jd2dk_d71e079c-1163-4e7e-ac94-0e92a0b602ad/horizon/0.log" Jan 21 16:45:52 crc kubenswrapper[4902]: I0121 16:45:52.669987 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-7xpxk_e253be6c-dccb-456f-b4ca-0aed1b901c43/install-os-openstack-openstack-cell1/0.log" Jan 21 16:45:53 crc kubenswrapper[4902]: I0121 16:45:53.395195 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-67bfc4c47-flndt_1bc7e490-49b1-4eef-ab29-4453235cf752/keystone-api/0.log" Jan 21 16:45:53 crc kubenswrapper[4902]: I0121 16:45:53.406806 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c4fb45d4-a64d-4e42-86b5-9e3924f0f877/kube-state-metrics/0.log" Jan 21 
16:45:53 crc kubenswrapper[4902]: I0121 16:45:53.416012 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_45f02625-70e9-48ec-8dd4-a0bd456a283b/adoption/0.log" Jan 21 16:45:55 crc kubenswrapper[4902]: I0121 16:45:55.533366 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_32eae2d9-5b57-4ae9-8451-fa00bd7be443/memcached/0.log" Jan 21 16:45:55 crc kubenswrapper[4902]: I0121 16:45:55.593463 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-66b9c9869c-btkxh_565a7068-4930-41e5-99bb-a08376495b63/neutron-api/0.log" Jan 21 16:45:55 crc kubenswrapper[4902]: I0121 16:45:55.638149 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-66b9c9869c-btkxh_565a7068-4930-41e5-99bb-a08376495b63/neutron-httpd/0.log" Jan 21 16:45:55 crc kubenswrapper[4902]: I0121 16:45:55.740325 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8603f024-f71f-486b-93aa-e6397021aa48/nova-api-log/0.log" Jan 21 16:45:56 crc kubenswrapper[4902]: I0121 16:45:56.122290 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8603f024-f71f-486b-93aa-e6397021aa48/nova-api-api/0.log" Jan 21 16:45:56 crc kubenswrapper[4902]: I0121 16:45:56.209825 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_61fa221c-a236-471b-a3ca-0efc339d0fcc/nova-cell0-conductor-conductor/0.log" Jan 21 16:45:56 crc kubenswrapper[4902]: I0121 16:45:56.280918 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f7b3d3ef-1806-4318-95f7-eb9cd2526d32/nova-cell1-conductor-conductor/0.log" Jan 21 16:45:56 crc kubenswrapper[4902]: I0121 16:45:56.362000 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_78825018-5d0a-4fe7-83c7-ef79700642cd/nova-cell1-novncproxy-novncproxy/0.log" Jan 21 16:45:56 crc kubenswrapper[4902]: I0121 16:45:56.435951 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_98338524-801f-465f-8845-1d061027c735/nova-metadata-log/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.414053 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_98338524-801f-465f-8845-1d061027c735/nova-metadata-metadata/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.523397 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6d12c9a0-2841-4a53-abd3-0cdb15d404fb/nova-scheduler-scheduler/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.802453 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-77584c4dc-lmbjv_441cf475-eec9-4cee-84ab-7807e9ab0b75/octavia-api/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.827094 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-77584c4dc-lmbjv_441cf475-eec9-4cee-84ab-7807e9ab0b75/octavia-api-provider-agent/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.847267 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-77584c4dc-lmbjv_441cf475-eec9-4cee-84ab-7807e9ab0b75/init/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.910841 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-vtnkx_e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39/octavia-healthmanager/0.log" Jan 21 16:45:57 crc 
kubenswrapper[4902]: I0121 16:45:57.918481 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-vtnkx_e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39/init/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.947703 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-pr9tl_34cb5d58-0b3f-40eb-a5ee-b8ab812c8008/octavia-housekeeping/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.957030 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-pr9tl_34cb5d58-0b3f-40eb-a5ee-b8ab812c8008/init/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.967634 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-kn74s_802fca2f-9dae-4f46-aaf3-c688c8ebbdfb/octavia-rsyslog/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.982810 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-kn74s_802fca2f-9dae-4f46-aaf3-c688c8ebbdfb/init/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.096027 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-drv9p_646b20f3-5a05-4352-9645-69bed7f67dae/octavia-worker/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.108438 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-drv9p_646b20f3-5a05-4352-9645-69bed7f67dae/init/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.131754 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a211ebd7-f82f-4cc7-91d3-77ec265a5d11/galera/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.149801 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a211ebd7-f82f-4cc7-91d3-77ec265a5d11/mysql-bootstrap/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.172950 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a02660d2-21f1-4d0b-9351-efc03413d6f8/galera/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.183270 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a02660d2-21f1-4d0b-9351-efc03413d6f8/mysql-bootstrap/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.197226 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_052c7402-6934-4f86-bb78-e83d7da3b587/openstackclient/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.216649 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9bqlx_cc475055-769c-4199-8486-3bdca7cd05bc/ovn-controller/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.252747 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-9vx8r_8f209787-a9f8-41df-8298-79c1381eecbb/openstack-network-exporter/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.270595 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qfhz4_d120f671-59d9-42ef-a905-2a6203c5896c/ovsdb-server/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.284062 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qfhz4_d120f671-59d9-42ef-a905-2a6203c5896c/ovs-vswitchd/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.290017 4902 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-controller-ovs-qfhz4_d120f671-59d9-42ef-a905-2a6203c5896c/ovsdb-server-init/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.309226 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_15260f61-f63b-48cf-8c1d-1269ed5264d6/adoption/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.322446 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b8db1a8e-13c3-41be-9f21-24077d0e4e29/ovn-northd/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.327026 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b8db1a8e-13c3-41be-9f21-24077d0e4e29/openstack-network-exporter/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.643646 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_52b530ea-b7ee-4420-a3d6-d140ac75c474/ovsdbserver-nb/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.650816 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_52b530ea-b7ee-4420-a3d6-d140ac75c474/openstack-network-exporter/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.667000 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_fcf74aba-3fc7-42ea-9537-a176dbf2a2e2/ovsdbserver-nb/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.672707 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_fcf74aba-3fc7-42ea-9537-a176dbf2a2e2/openstack-network-exporter/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.688240 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_69d6d956-f400-4339-8b68-c2644bb9b9eb/ovsdbserver-nb/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.692712 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_69d6d956-f400-4339-8b68-c2644bb9b9eb/openstack-network-exporter/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.708285 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_fa609e80-09d5-4393-a79f-9989f9223bdd/ovsdbserver-sb/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.712867 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_fa609e80-09d5-4393-a79f-9989f9223bdd/openstack-network-exporter/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.732955 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_51aa3a3a-61f9-4757-b302-aa170904d97f/ovsdbserver-sb/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.738415 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_51aa3a3a-61f9-4757-b302-aa170904d97f/openstack-network-exporter/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.992771 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_aadc3978-ec1c-4d8d-8d02-f199d6509d5c/ovsdbserver-sb/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:58.999960 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_aadc3978-ec1c-4d8d-8d02-f199d6509d5c/openstack-network-exporter/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.097547 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-856775b9dd-twjxc_43a8c70b-ebc7-4ce0-8d5c-e790226eff45/placement-log/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.126628 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-856775b9dd-twjxc_43a8c70b-ebc7-4ce0-8d5c-e790226eff45/placement-api/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.261514 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q_8b723bd7-4449-4516-bcc6-9d57d981fbda/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.282361 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_294a561c-9181-4330-86e5-ab51e9f3c07c/prometheus/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.296594 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_294a561c-9181-4330-86e5-ab51e9f3c07c/config-reloader/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.304570 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_294a561c-9181-4330-86e5-ab51e9f3c07c/thanos-sidecar/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.364650 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_294a561c-9181-4330-86e5-ab51e9f3c07c/init-config-reloader/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.394354 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e0bcf8cd-3dd9-409b-84d9-693f7e471fc1/rabbitmq/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.402618 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e0bcf8cd-3dd9-409b-84d9-693f7e471fc1/setup-container/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.438048 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7f24aaa5-50e0-4e80-ba28-3fa2b770fac8/rabbitmq/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.448171 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7f24aaa5-50e0-4e80-ba28-3fa2b770fac8/setup-container/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.537513 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5866fbc874-ktwnr_4d3194a4-20d2-47cf-8d32-37a8afa5738d/proxy-httpd/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.549559 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5866fbc874-ktwnr_4d3194a4-20d2-47cf-8d32-37a8afa5738d/proxy-server/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.559661 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-mmsfz_4000cb23-899c-4f52-8c37-8e1c7108a21d/swift-ring-rebalance/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.632041 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c_18a1d8a3-fcb5-408d-88ab-97d74bad0a8f/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.642037 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-5c9t8_ffce6892-25f4-48d1-b314-24d784fbc43f/validate-network-openstack-openstack-cell1/0.log" Jan 21 16:46:01 crc kubenswrapper[4902]: I0121 16:46:01.832834 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/crc-debug-4hbxl" event={"ID":"db88f391-e5de-44fc-8bb9-7d7b4bddd96d","Type":"ContainerStarted","Data":"31fc280c2d7e2874d5d3ebbb00ce9f04de5add4709a413f82f3fb2ff907c3669"} Jan 21 16:46:01 crc kubenswrapper[4902]: I0121 16:46:01.856842 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-649bt/crc-debug-4hbxl" podStartSLOduration=1.8305830429999999 podStartE2EDuration="14.856827322s" podCreationTimestamp="2026-01-21 16:45:47 +0000 UTC" firstStartedPulling="2026-01-21 16:45:48.193863256 +0000 UTC m=+7910.270696285" lastFinishedPulling="2026-01-21 16:46:01.220107535 +0000 UTC m=+7923.296940564" observedRunningTime="2026-01-21 16:46:01.851567213 +0000 UTC m=+7923.928400242" watchObservedRunningTime="2026-01-21 16:46:01.856827322 +0000 UTC m=+7923.933660351" Jan 21 16:46:15 crc kubenswrapper[4902]: I0121 16:46:15.204374 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-h2pgt_694bf42b-c612-44c2-964b-c91336b8afa1/controller/0.log" Jan 21 16:46:15 crc kubenswrapper[4902]: I0121 16:46:15.211228 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-h2pgt_694bf42b-c612-44c2-964b-c91336b8afa1/kube-rbac-proxy/0.log" Jan 21 16:46:15 crc kubenswrapper[4902]: I0121 16:46:15.228518 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-72rgj_4f8bf62b-aae0-4080-a5ee-2472a60fe41f/frr-k8s-webhook-server/0.log" Jan 21 16:46:15 crc kubenswrapper[4902]: I0121 16:46:15.251790 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/controller/0.log" Jan 21 16:46:17 crc kubenswrapper[4902]: I0121 16:46:17.769679 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:46:17 crc kubenswrapper[4902]: I0121 16:46:17.770404 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.252780 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/frr/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.309400 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/reloader/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.407122 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/frr-metrics/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.501758 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/kube-rbac-proxy/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.521700 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/kube-rbac-proxy-frr/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.527321 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-frr-files/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.534183 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-reloader/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.543194 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-metrics/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.573416 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c6bfc4dcb-mzr68_1ddec7fa-7afd-4d77-af77-509910e52c70/manager/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.583653 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-79cc595b65-5xnzn_050f3d44-1ff2-4334-8fa8-c5124c7199d9/webhook-server/0.log" Jan 21 16:46:19 crc kubenswrapper[4902]: I0121 16:46:19.200523 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5m6ct_4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501/speaker/0.log" Jan 21 16:46:19 crc kubenswrapper[4902]: I0121 16:46:19.211672 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5m6ct_4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501/kube-rbac-proxy/0.log" Jan 21 16:46:32 crc kubenswrapper[4902]: I0121 16:46:32.609000 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/extract/0.log" Jan 21 16:46:32 crc kubenswrapper[4902]: I0121 16:46:32.621530 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/util/0.log" Jan 21 16:46:32 crc kubenswrapper[4902]: I0121 16:46:32.629293 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/pull/0.log" Jan 21 16:46:32 crc kubenswrapper[4902]: I0121 16:46:32.773763 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-j6fwd_66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e/manager/0.log" Jan 21 16:46:32 crc kubenswrapper[4902]: I0121 16:46:32.839317 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-nh8zr_b924ea4f-71c9-4f42-aa0a-a4945ea589e3/manager/0.log" Jan 21 16:46:32 crc kubenswrapper[4902]: I0121 16:46:32.859554 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-sdkxs_bc4c2749-7073-4bb8-8c87-736187565b08/manager/0.log" Jan 21 16:46:32 crc kubenswrapper[4902]: I0121 16:46:32.998325 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-gffs4_3c1e8b4d-a47d-4a6e-be63-bfc41d04d964/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4902]: I0121 16:46:33.052909 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-lttm9_56c38bff-8549-485e-a91f-1d89d801a8ee/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4902]: I0121 16:46:33.075085 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-nqnfh_05001c4b-c8f0-46ea-bf02-d7537d8a373b/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4902]: I0121 16:46:33.704699 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-46xm9_cea39ffd-421f-4b74-9f26-065f49e00786/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4902]: I0121 16:46:33.728226 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-khcxt_f3f5f576-48b8-4175-8d70-d8de7e41a63a/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4902]: I0121 16:46:33.843460 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-qwcvn_7d33c2a4-c369-4a5f-9592-289c162f095c/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4902]: I0121 16:46:33.854158 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-x6xrb_a5d9aa95-7d14-4a6e-af38-dddad85007f4/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4902]: I0121 16:46:33.931145 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-xrlqr_01091192-af46-486f-8890-787505f3b41c/manager/0.log" Jan 21 16:46:34 crc kubenswrapper[4902]: I0121 16:46:34.139878 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-8vfnj_0b55bf9c-cc65-446c-849e-035fb1bba4c4/manager/0.log" Jan 21 16:46:34 crc kubenswrapper[4902]: I0121 16:46:34.276590 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-nql9r_b01862fd-dfad-4a73-ac90-5ef7823c06ea/manager/0.log" Jan 21 16:46:34 crc kubenswrapper[4902]: I0121 16:46:34.328400 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-c2nb6_bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90/manager/0.log" Jan 21 16:46:34 crc kubenswrapper[4902]: I0121 16:46:34.342021 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b9875986dhp6x8_14dc1630-021a-4b05-8ac4-d99368b51726/manager/0.log" Jan 21 16:46:34 crc kubenswrapper[4902]: I0121 16:46:34.474137 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d4d7d8545-mvcwp_1fbcd3da-0b42-4d83-b774-776f9d1612d5/operator/0.log" Jan 21 16:46:36 crc kubenswrapper[4902]: I0121 16:46:36.707801 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75bfd788c8-hr66g_77e35131-84f1-4df7-b6de-ceda247df931/manager/0.log" Jan 21 16:46:36 crc kubenswrapper[4902]: I0121 16:46:36.821425 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-dp8mf_2d05d6f5-a861-4117-b4a0-00e98da2fe57/registry-server/0.log" Jan 21 16:46:36 crc kubenswrapper[4902]: I0121 16:46:36.923829 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lljfd_3912b1da-b132-48da-9b67-1f4aeb2203c4/manager/0.log" Jan 21 16:46:36 crc kubenswrapper[4902]: I0121 16:46:36.973514 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-pmvgc_c5d64dc8-80f6-4076-9068-11ec25d524b5/manager/0.log" Jan 21 16:46:37 crc kubenswrapper[4902]: I0121 16:46:37.006222 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-s7vgs_1ffd452b-d331-4c80-a6f6-0b1b21d5fd84/operator/0.log" Jan 21 16:46:37 crc kubenswrapper[4902]: I0121 16:46:37.043559 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-wqmq2_1e685238-529c-4964-af9d-8abed4dfcfae/manager/0.log" Jan 21 16:46:37 crc kubenswrapper[4902]: I0121 16:46:37.220635 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-v7bj9_2ad74206-4131-4395-8392-9697c2c164eb/manager/0.log" Jan 21 16:46:37 crc kubenswrapper[4902]: I0121 16:46:37.235448 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-gn5kf_624ad6d5-5647-43c8-8e62-751e4c5989b3/manager/0.log" Jan 21 16:46:37 crc kubenswrapper[4902]: I0121 16:46:37.247733 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-s8g8n_6783daa1-082d-4ab7-be65-dc2fb211be6c/manager/0.log" Jan 21 16:46:41 crc kubenswrapper[4902]: I0121 16:46:41.507623 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qm6gk_9467c15f-f3fe-4594-b97d-0838d43877d1/control-plane-machine-set-operator/0.log" Jan 21 16:46:41 crc kubenswrapper[4902]: I0121 16:46:41.530373 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-57jmg_91a268d0-59c0-4e7f-8b78-260d14051e34/kube-rbac-proxy/0.log" Jan 21 16:46:41 crc kubenswrapper[4902]: I0121 16:46:41.549929 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-57jmg_91a268d0-59c0-4e7f-8b78-260d14051e34/machine-api-operator/0.log" Jan 21 16:46:43 crc kubenswrapper[4902]: I0121 16:46:43.420060 4902 generic.go:334] "Generic (PLEG): container finished" podID="db88f391-e5de-44fc-8bb9-7d7b4bddd96d" containerID="31fc280c2d7e2874d5d3ebbb00ce9f04de5add4709a413f82f3fb2ff907c3669" exitCode=0 Jan 21 16:46:43 crc kubenswrapper[4902]: I0121 16:46:43.420377 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/crc-debug-4hbxl" event={"ID":"db88f391-e5de-44fc-8bb9-7d7b4bddd96d","Type":"ContainerDied","Data":"31fc280c2d7e2874d5d3ebbb00ce9f04de5add4709a413f82f3fb2ff907c3669"} Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.582017 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.614279 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-649bt/crc-debug-4hbxl"] Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.623337 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-649bt/crc-debug-4hbxl"] Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.774848 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-host\") pod \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\" (UID: \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\") " Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.774910 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8lt5\" (UniqueName: \"kubernetes.io/projected/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-kube-api-access-b8lt5\") pod \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\" (UID: \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\") " Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.777183 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-host" (OuterVolumeSpecName: "host") pod "db88f391-e5de-44fc-8bb9-7d7b4bddd96d" (UID: "db88f391-e5de-44fc-8bb9-7d7b4bddd96d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.781364 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-kube-api-access-b8lt5" (OuterVolumeSpecName: "kube-api-access-b8lt5") pod "db88f391-e5de-44fc-8bb9-7d7b4bddd96d" (UID: "db88f391-e5de-44fc-8bb9-7d7b4bddd96d"). InnerVolumeSpecName "kube-api-access-b8lt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.877584 4902 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.877621 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8lt5\" (UniqueName: \"kubernetes.io/projected/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-kube-api-access-b8lt5\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:45 crc kubenswrapper[4902]: I0121 16:46:45.438263 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7c7697f0980d691d3313b527a6369dc55eb3e3c975f66aca6b883c28023fe16" Jan 21 16:46:45 crc kubenswrapper[4902]: I0121 16:46:45.438324 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:46:45 crc kubenswrapper[4902]: I0121 16:46:45.813533 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-649bt/crc-debug-pv9xv"] Jan 21 16:46:45 crc kubenswrapper[4902]: E0121 16:46:45.814091 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db88f391-e5de-44fc-8bb9-7d7b4bddd96d" containerName="container-00" Jan 21 16:46:45 crc kubenswrapper[4902]: I0121 16:46:45.814108 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="db88f391-e5de-44fc-8bb9-7d7b4bddd96d" containerName="container-00" Jan 21 16:46:45 crc kubenswrapper[4902]: I0121 16:46:45.814386 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="db88f391-e5de-44fc-8bb9-7d7b4bddd96d" containerName="container-00" Jan 21 16:46:45 crc kubenswrapper[4902]: I0121 16:46:45.815457 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.001149 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fe6195f-4ceb-42d9-b303-f1e722166c5e-host\") pod \"crc-debug-pv9xv\" (UID: \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\") " pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.001237 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sclh6\" (UniqueName: \"kubernetes.io/projected/3fe6195f-4ceb-42d9-b303-f1e722166c5e-kube-api-access-sclh6\") pod \"crc-debug-pv9xv\" (UID: \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\") " pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.103417 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fe6195f-4ceb-42d9-b303-f1e722166c5e-host\") pod \"crc-debug-pv9xv\" (UID: \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\") " pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.103504 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sclh6\" (UniqueName: \"kubernetes.io/projected/3fe6195f-4ceb-42d9-b303-f1e722166c5e-kube-api-access-sclh6\") pod \"crc-debug-pv9xv\" (UID: \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\") " pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.103750 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fe6195f-4ceb-42d9-b303-f1e722166c5e-host\") pod \"crc-debug-pv9xv\" (UID: \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\") " pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.125487 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sclh6\" (UniqueName: \"kubernetes.io/projected/3fe6195f-4ceb-42d9-b303-f1e722166c5e-kube-api-access-sclh6\") pod \"crc-debug-pv9xv\" (UID: \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\") " pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.136713 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.315661 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db88f391-e5de-44fc-8bb9-7d7b4bddd96d" path="/var/lib/kubelet/pods/db88f391-e5de-44fc-8bb9-7d7b4bddd96d/volumes" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.456579 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/crc-debug-pv9xv" event={"ID":"3fe6195f-4ceb-42d9-b303-f1e722166c5e","Type":"ContainerStarted","Data":"107d08e5a61bbb816ede570e57aabc9602def3d2777c3d991d3591f77d98535a"} Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.468655 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/crc-debug-pv9xv" event={"ID":"3fe6195f-4ceb-42d9-b303-f1e722166c5e","Type":"ContainerDied","Data":"899b0d27b49bb0fa1d47ffc0ca83404011feda128ea60d1928b8e625e3893f24"} Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.469093 4902 generic.go:334] "Generic (PLEG): container finished" podID="3fe6195f-4ceb-42d9-b303-f1e722166c5e" containerID="899b0d27b49bb0fa1d47ffc0ca83404011feda128ea60d1928b8e625e3893f24" exitCode=0 Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.769894 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.769941 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.769982 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.770798 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"46dab60a77a31c9c125a7eb039a17b28b44898970f8705055f9ff1b6d0fef030"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.770857 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://46dab60a77a31c9c125a7eb039a17b28b44898970f8705055f9ff1b6d0fef030" gracePeriod=600 Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.914746 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-649bt/crc-debug-pv9xv"] Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.923369 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-649bt/crc-debug-pv9xv"] Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.481252 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" 
containerID="46dab60a77a31c9c125a7eb039a17b28b44898970f8705055f9ff1b6d0fef030" exitCode=0 Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.481314 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"46dab60a77a31c9c125a7eb039a17b28b44898970f8705055f9ff1b6d0fef030"} Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.481662 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc"} Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.481682 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.602589 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.675793 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fe6195f-4ceb-42d9-b303-f1e722166c5e-host\") pod \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\" (UID: \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\") " Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.675918 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fe6195f-4ceb-42d9-b303-f1e722166c5e-host" (OuterVolumeSpecName: "host") pod "3fe6195f-4ceb-42d9-b303-f1e722166c5e" (UID: "3fe6195f-4ceb-42d9-b303-f1e722166c5e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.676438 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sclh6\" (UniqueName: \"kubernetes.io/projected/3fe6195f-4ceb-42d9-b303-f1e722166c5e-kube-api-access-sclh6\") pod \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\" (UID: \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\") " Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.677189 4902 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fe6195f-4ceb-42d9-b303-f1e722166c5e-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.682459 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe6195f-4ceb-42d9-b303-f1e722166c5e-kube-api-access-sclh6" (OuterVolumeSpecName: "kube-api-access-sclh6") pod "3fe6195f-4ceb-42d9-b303-f1e722166c5e" (UID: "3fe6195f-4ceb-42d9-b303-f1e722166c5e"). InnerVolumeSpecName "kube-api-access-sclh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.778568 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sclh6\" (UniqueName: \"kubernetes.io/projected/3fe6195f-4ceb-42d9-b303-f1e722166c5e-kube-api-access-sclh6\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.093451 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-649bt/crc-debug-w5ctj"] Jan 21 16:46:49 crc kubenswrapper[4902]: E0121 16:46:49.094459 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe6195f-4ceb-42d9-b303-f1e722166c5e" containerName="container-00" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.094483 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe6195f-4ceb-42d9-b303-f1e722166c5e" containerName="container-00" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.095167 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fe6195f-4ceb-42d9-b303-f1e722166c5e" containerName="container-00" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.096620 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.211448 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44mr5\" (UniqueName: \"kubernetes.io/projected/97270538-b46a-4318-9a2b-11eec116c8d3-kube-api-access-44mr5\") pod \"crc-debug-w5ctj\" (UID: \"97270538-b46a-4318-9a2b-11eec116c8d3\") " pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.211530 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97270538-b46a-4318-9a2b-11eec116c8d3-host\") pod \"crc-debug-w5ctj\" (UID: \"97270538-b46a-4318-9a2b-11eec116c8d3\") " pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.314133 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44mr5\" (UniqueName: \"kubernetes.io/projected/97270538-b46a-4318-9a2b-11eec116c8d3-kube-api-access-44mr5\") pod \"crc-debug-w5ctj\" (UID: \"97270538-b46a-4318-9a2b-11eec116c8d3\") " pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.314388 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97270538-b46a-4318-9a2b-11eec116c8d3-host\") pod \"crc-debug-w5ctj\" (UID: \"97270538-b46a-4318-9a2b-11eec116c8d3\") " pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.314557 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97270538-b46a-4318-9a2b-11eec116c8d3-host\") pod \"crc-debug-w5ctj\" (UID: \"97270538-b46a-4318-9a2b-11eec116c8d3\") " pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.336222 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44mr5\" (UniqueName: \"kubernetes.io/projected/97270538-b46a-4318-9a2b-11eec116c8d3-kube-api-access-44mr5\") pod \"crc-debug-w5ctj\" (UID: \"97270538-b46a-4318-9a2b-11eec116c8d3\") " 
pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.424369 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:49 crc kubenswrapper[4902]: W0121 16:46:49.465910 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97270538_b46a_4318_9a2b_11eec116c8d3.slice/crio-b234558abb96a2a20989651bc81c72e57ae981e453627beb3d433dbd2d223483 WatchSource:0}: Error finding container b234558abb96a2a20989651bc81c72e57ae981e453627beb3d433dbd2d223483: Status 404 returned error can't find the container with id b234558abb96a2a20989651bc81c72e57ae981e453627beb3d433dbd2d223483 Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.495783 4902 scope.go:117] "RemoveContainer" containerID="899b0d27b49bb0fa1d47ffc0ca83404011feda128ea60d1928b8e625e3893f24" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.495812 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.497524 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/crc-debug-w5ctj" event={"ID":"97270538-b46a-4318-9a2b-11eec116c8d3","Type":"ContainerStarted","Data":"b234558abb96a2a20989651bc81c72e57ae981e453627beb3d433dbd2d223483"} Jan 21 16:46:50 crc kubenswrapper[4902]: I0121 16:46:50.307266 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fe6195f-4ceb-42d9-b303-f1e722166c5e" path="/var/lib/kubelet/pods/3fe6195f-4ceb-42d9-b303-f1e722166c5e/volumes" Jan 21 16:46:50 crc kubenswrapper[4902]: I0121 16:46:50.517012 4902 generic.go:334] "Generic (PLEG): container finished" podID="97270538-b46a-4318-9a2b-11eec116c8d3" containerID="1a03d23c5092a33fc8a964770fef978491b258293bf35aedcd8a382acf457933" exitCode=0 Jan 21 16:46:50 crc kubenswrapper[4902]: I0121 16:46:50.517088 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/crc-debug-w5ctj" event={"ID":"97270538-b46a-4318-9a2b-11eec116c8d3","Type":"ContainerDied","Data":"1a03d23c5092a33fc8a964770fef978491b258293bf35aedcd8a382acf457933"} Jan 21 16:46:50 crc kubenswrapper[4902]: I0121 16:46:50.556873 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-649bt/crc-debug-w5ctj"] Jan 21 16:46:50 crc kubenswrapper[4902]: I0121 16:46:50.568618 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-649bt/crc-debug-w5ctj"] Jan 21 16:46:51 crc kubenswrapper[4902]: I0121 16:46:51.679960 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:51 crc kubenswrapper[4902]: I0121 16:46:51.765746 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44mr5\" (UniqueName: \"kubernetes.io/projected/97270538-b46a-4318-9a2b-11eec116c8d3-kube-api-access-44mr5\") pod \"97270538-b46a-4318-9a2b-11eec116c8d3\" (UID: \"97270538-b46a-4318-9a2b-11eec116c8d3\") " Jan 21 16:46:51 crc kubenswrapper[4902]: I0121 16:46:51.765827 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97270538-b46a-4318-9a2b-11eec116c8d3-host\") pod \"97270538-b46a-4318-9a2b-11eec116c8d3\" (UID: \"97270538-b46a-4318-9a2b-11eec116c8d3\") " Jan 21 16:46:51 crc kubenswrapper[4902]: I0121 16:46:51.766009 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97270538-b46a-4318-9a2b-11eec116c8d3-host" (OuterVolumeSpecName: "host") pod "97270538-b46a-4318-9a2b-11eec116c8d3" (UID: "97270538-b46a-4318-9a2b-11eec116c8d3"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:46:51 crc kubenswrapper[4902]: I0121 16:46:51.766586 4902 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97270538-b46a-4318-9a2b-11eec116c8d3-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:51 crc kubenswrapper[4902]: I0121 16:46:51.772256 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97270538-b46a-4318-9a2b-11eec116c8d3-kube-api-access-44mr5" (OuterVolumeSpecName: "kube-api-access-44mr5") pod "97270538-b46a-4318-9a2b-11eec116c8d3" (UID: "97270538-b46a-4318-9a2b-11eec116c8d3"). InnerVolumeSpecName "kube-api-access-44mr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:46:51 crc kubenswrapper[4902]: I0121 16:46:51.868373 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44mr5\" (UniqueName: \"kubernetes.io/projected/97270538-b46a-4318-9a2b-11eec116c8d3-kube-api-access-44mr5\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:52 crc kubenswrapper[4902]: I0121 16:46:52.310451 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97270538-b46a-4318-9a2b-11eec116c8d3" path="/var/lib/kubelet/pods/97270538-b46a-4318-9a2b-11eec116c8d3/volumes" Jan 21 16:46:52 crc kubenswrapper[4902]: I0121 16:46:52.539617 4902 scope.go:117] "RemoveContainer" containerID="1a03d23c5092a33fc8a964770fef978491b258293bf35aedcd8a382acf457933" Jan 21 16:46:52 crc kubenswrapper[4902]: I0121 16:46:52.539647 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:48:10 crc kubenswrapper[4902]: I0121 16:48:10.795915 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r4kj6"] Jan 21 16:48:10 crc kubenswrapper[4902]: E0121 16:48:10.797077 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97270538-b46a-4318-9a2b-11eec116c8d3" containerName="container-00" Jan 21 16:48:10 crc kubenswrapper[4902]: I0121 16:48:10.797093 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="97270538-b46a-4318-9a2b-11eec116c8d3" containerName="container-00" Jan 21 16:48:10 crc kubenswrapper[4902]: I0121 16:48:10.797378 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="97270538-b46a-4318-9a2b-11eec116c8d3" containerName="container-00" Jan 21 16:48:10 crc kubenswrapper[4902]: I0121 16:48:10.799436 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:10 crc kubenswrapper[4902]: I0121 16:48:10.813162 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4kj6"] Jan 21 16:48:10 crc kubenswrapper[4902]: I0121 16:48:10.945668 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-catalog-content\") pod \"redhat-marketplace-r4kj6\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:10 crc kubenswrapper[4902]: I0121 16:48:10.945877 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-utilities\") pod \"redhat-marketplace-r4kj6\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:10 crc kubenswrapper[4902]: I0121 16:48:10.946124 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkg8m\" (UniqueName: \"kubernetes.io/projected/fae9ccc8-6a44-4a51-9379-4a9df0699618-kube-api-access-vkg8m\") pod \"redhat-marketplace-r4kj6\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:11 crc kubenswrapper[4902]: I0121 16:48:11.048624 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-catalog-content\") pod \"redhat-marketplace-r4kj6\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:11 crc kubenswrapper[4902]: I0121 16:48:11.048738 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-utilities\") pod \"redhat-marketplace-r4kj6\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:11 crc kubenswrapper[4902]: I0121 16:48:11.048810 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkg8m\" (UniqueName: \"kubernetes.io/projected/fae9ccc8-6a44-4a51-9379-4a9df0699618-kube-api-access-vkg8m\") pod \"redhat-marketplace-r4kj6\" (UID: 
\"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:11 crc kubenswrapper[4902]: I0121 16:48:11.049256 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-catalog-content\") pod \"redhat-marketplace-r4kj6\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:11 crc kubenswrapper[4902]: I0121 16:48:11.049381 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-utilities\") pod \"redhat-marketplace-r4kj6\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:11 crc kubenswrapper[4902]: I0121 16:48:11.074018 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkg8m\" (UniqueName: \"kubernetes.io/projected/fae9ccc8-6a44-4a51-9379-4a9df0699618-kube-api-access-vkg8m\") pod \"redhat-marketplace-r4kj6\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:11 crc kubenswrapper[4902]: I0121 16:48:11.120361 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:11 crc kubenswrapper[4902]: I0121 16:48:11.599305 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4kj6"] Jan 21 16:48:12 crc kubenswrapper[4902]: I0121 16:48:12.436342 4902 generic.go:334] "Generic (PLEG): container finished" podID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerID="d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54" exitCode=0 Jan 21 16:48:12 crc kubenswrapper[4902]: I0121 16:48:12.436424 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4kj6" event={"ID":"fae9ccc8-6a44-4a51-9379-4a9df0699618","Type":"ContainerDied","Data":"d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54"} Jan 21 16:48:12 crc kubenswrapper[4902]: I0121 16:48:12.436639 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4kj6" event={"ID":"fae9ccc8-6a44-4a51-9379-4a9df0699618","Type":"ContainerStarted","Data":"9800e26045e04895284317f27a8063b63ad4f1c304ba26cb489d923200abe7c6"} Jan 21 16:48:13 crc kubenswrapper[4902]: I0121 16:48:13.446604 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4kj6" event={"ID":"fae9ccc8-6a44-4a51-9379-4a9df0699618","Type":"ContainerStarted","Data":"35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900"} Jan 21 16:48:14 crc kubenswrapper[4902]: I0121 16:48:14.457635 4902 generic.go:334] "Generic (PLEG): container finished" podID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerID="35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900" exitCode=0 Jan 21 16:48:14 crc kubenswrapper[4902]: I0121 16:48:14.458154 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4kj6" event={"ID":"fae9ccc8-6a44-4a51-9379-4a9df0699618","Type":"ContainerDied","Data":"35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900"} Jan 21 16:48:15 crc kubenswrapper[4902]: I0121 16:48:15.471186 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-r4kj6" event={"ID":"fae9ccc8-6a44-4a51-9379-4a9df0699618","Type":"ContainerStarted","Data":"ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96"} Jan 21 16:48:15 crc kubenswrapper[4902]: I0121 16:48:15.506668 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r4kj6" podStartSLOduration=3.025205395 podStartE2EDuration="5.50665024s" podCreationTimestamp="2026-01-21 16:48:10 +0000 UTC" firstStartedPulling="2026-01-21 16:48:12.438554985 +0000 UTC m=+8054.515388024" lastFinishedPulling="2026-01-21 16:48:14.91999984 +0000 UTC m=+8056.996832869" observedRunningTime="2026-01-21 16:48:15.49499463 +0000 UTC m=+8057.571827689" watchObservedRunningTime="2026-01-21 16:48:15.50665024 +0000 UTC m=+8057.583483269" Jan 21 16:48:21 crc kubenswrapper[4902]: I0121 16:48:21.120558 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:21 crc kubenswrapper[4902]: I0121 16:48:21.120976 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:21 crc kubenswrapper[4902]: I0121 16:48:21.166493 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:21 crc kubenswrapper[4902]: I0121 16:48:21.577895 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:21 crc kubenswrapper[4902]: I0121 16:48:21.621341 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4kj6"] Jan 21 16:48:23 crc kubenswrapper[4902]: I0121 16:48:23.554361 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r4kj6" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerName="registry-server" containerID="cri-o://ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96" gracePeriod=2 Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.040513 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.141130 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkg8m\" (UniqueName: \"kubernetes.io/projected/fae9ccc8-6a44-4a51-9379-4a9df0699618-kube-api-access-vkg8m\") pod \"fae9ccc8-6a44-4a51-9379-4a9df0699618\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.141302 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-utilities\") pod \"fae9ccc8-6a44-4a51-9379-4a9df0699618\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.141585 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-catalog-content\") pod \"fae9ccc8-6a44-4a51-9379-4a9df0699618\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.146350 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-utilities" (OuterVolumeSpecName: "utilities") pod "fae9ccc8-6a44-4a51-9379-4a9df0699618" (UID: "fae9ccc8-6a44-4a51-9379-4a9df0699618"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.150855 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fae9ccc8-6a44-4a51-9379-4a9df0699618-kube-api-access-vkg8m" (OuterVolumeSpecName: "kube-api-access-vkg8m") pod "fae9ccc8-6a44-4a51-9379-4a9df0699618" (UID: "fae9ccc8-6a44-4a51-9379-4a9df0699618"). InnerVolumeSpecName "kube-api-access-vkg8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.184920 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fae9ccc8-6a44-4a51-9379-4a9df0699618" (UID: "fae9ccc8-6a44-4a51-9379-4a9df0699618"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.248589 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.248633 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.248654 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkg8m\" (UniqueName: \"kubernetes.io/projected/fae9ccc8-6a44-4a51-9379-4a9df0699618-kube-api-access-vkg8m\") on node \"crc\" DevicePath \"\"" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.565517 4902 generic.go:334] "Generic (PLEG): container finished" podID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerID="ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96" exitCode=0 Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.565586 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.565611 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4kj6" event={"ID":"fae9ccc8-6a44-4a51-9379-4a9df0699618","Type":"ContainerDied","Data":"ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96"} Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.567065 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4kj6" event={"ID":"fae9ccc8-6a44-4a51-9379-4a9df0699618","Type":"ContainerDied","Data":"9800e26045e04895284317f27a8063b63ad4f1c304ba26cb489d923200abe7c6"} Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.567102 4902 scope.go:117] "RemoveContainer" containerID="ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.595825 4902 scope.go:117] "RemoveContainer" containerID="35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.598177 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4kj6"] Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.607516 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4kj6"] Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.613448 4902 scope.go:117] "RemoveContainer" containerID="d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.671603 4902 scope.go:117] "RemoveContainer" containerID="ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96" Jan 21 16:48:24 crc kubenswrapper[4902]: E0121 16:48:24.672126 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96\": container with ID starting with ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96 not found: ID does not exist" containerID="ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.672169 4902 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96"} err="failed to get container status \"ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96\": rpc error: code = NotFound desc = could not find container \"ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96\": container with ID starting with ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96 not found: ID does not exist" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.672200 4902 scope.go:117] "RemoveContainer" containerID="35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900" Jan 21 16:48:24 crc kubenswrapper[4902]: E0121 16:48:24.672500 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900\": container with ID starting with 35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900 not found: ID does not exist" containerID="35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.672536 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900"} err="failed to get container status \"35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900\": rpc error: code = NotFound desc = could not find container \"35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900\": container with ID starting with 35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900 not found: ID does not exist" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.672562 4902 scope.go:117] "RemoveContainer" containerID="d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54" Jan 21 16:48:24 crc kubenswrapper[4902]: E0121 16:48:24.672766 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54\": container with ID starting with d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54 not found: ID does not exist" containerID="d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.672788 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54"} err="failed to get container status \"d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54\": rpc error: code = NotFound desc = could not find container \"d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54\": container with ID starting with d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54 not found: ID does not exist" Jan 21 16:48:26 crc kubenswrapper[4902]: I0121 16:48:26.501627 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" path="/var/lib/kubelet/pods/fae9ccc8-6a44-4a51-9379-4a9df0699618/volumes" Jan 21 16:48:28 crc kubenswrapper[4902]: I0121 16:48:28.052152 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-4cd6m_12dae6d4-a2b1-4ef8-ae74-369697c9172b/cert-manager-controller/0.log" Jan 21 16:48:28 crc kubenswrapper[4902]: I0121 16:48:28.073958 4902 
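The RemoveContainer/NotFound pairs above are the idempotent-delete pattern: a NotFound from the runtime means the container is already gone, so the deletor logs the error and treats the removal as complete. A sketch of that handling under assumed types (the kubelet's actual code differs):

package sketch

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeService is an assumed stand-in for the CRI client used above.
type runtimeService interface {
	ContainerStatus(id string) (any, error)
	RemoveContainer(id string) error
}

func removeContainer(rt runtimeService, id string) error {
	if _, err := rt.ContainerStatus(id); status.Code(err) == codes.NotFound {
		return nil // already removed; nothing left to delete
	}
	return rt.RemoveContainer(id)
}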
log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-llf68_21799993-1de7-4aef-9cfa-c132249ecf74/cert-manager-cainjector/0.log" Jan 21 16:48:28 crc kubenswrapper[4902]: I0121 16:48:28.083390 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-p2522_9093daac-4fd2-4075-8e73-d358cd885c3c/cert-manager-webhook/0.log" Jan 21 16:48:34 crc kubenswrapper[4902]: I0121 16:48:34.268292 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-6vz5c_ce3bf701-2498-42d7-969d-8944df02f1c7/nmstate-console-plugin/0.log" Jan 21 16:48:34 crc kubenswrapper[4902]: I0121 16:48:34.306503 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-p9t9n_14dd02e5-8cb3-4382-9107-5f5b698a2701/nmstate-handler/0.log" Jan 21 16:48:34 crc kubenswrapper[4902]: I0121 16:48:34.316729 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-x6qnj_d406f136-7416-4694-b6cd-d6bdf6b60e1f/nmstate-metrics/0.log" Jan 21 16:48:34 crc kubenswrapper[4902]: I0121 16:48:34.332275 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-x6qnj_d406f136-7416-4694-b6cd-d6bdf6b60e1f/kube-rbac-proxy/0.log" Jan 21 16:48:34 crc kubenswrapper[4902]: I0121 16:48:34.342774 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-q2fs2_bb74694a-8b82-4c31-85da-4ba2c732bbb8/nmstate-operator/0.log" Jan 21 16:48:34 crc kubenswrapper[4902]: I0121 16:48:34.362527 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-88bkr_87768889-c41f-4563-8b38-3d939fa22303/nmstate-webhook/0.log" Jan 21 16:48:40 crc kubenswrapper[4902]: I0121 16:48:40.514401 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-tw4cr_5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5/prometheus-operator/0.log" Jan 21 16:48:40 crc kubenswrapper[4902]: I0121 16:48:40.525124 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cb6669c59-csqks_c014cd52-9da2-4fa7-96b6-0a400835f56e/prometheus-operator-admission-webhook/0.log" Jan 21 16:48:40 crc kubenswrapper[4902]: I0121 16:48:40.535279 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cb6669c59-l469x_dce978e0-318d-4086-8594-08da83f1fe23/prometheus-operator-admission-webhook/0.log" Jan 21 16:48:40 crc kubenswrapper[4902]: I0121 16:48:40.575763 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-6xc5d_cdfe14cf-a2d6-4df7-92b5-c4146bdab44d/operator/0.log" Jan 21 16:48:40 crc kubenswrapper[4902]: I0121 16:48:40.589206 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-k6f6k_ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a/perses-operator/0.log" Jan 21 16:48:46 crc kubenswrapper[4902]: I0121 16:48:46.882297 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-h2pgt_694bf42b-c612-44c2-964b-c91336b8afa1/controller/0.log" Jan 21 16:48:46 crc kubenswrapper[4902]: I0121 16:48:46.889806 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-h2pgt_694bf42b-c612-44c2-964b-c91336b8afa1/kube-rbac-proxy/0.log" Jan 21 16:48:46 crc kubenswrapper[4902]: I0121 16:48:46.904519 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-72rgj_4f8bf62b-aae0-4080-a5ee-2472a60fe41f/frr-k8s-webhook-server/0.log" Jan 21 16:48:46 crc kubenswrapper[4902]: I0121 16:48:46.927759 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/controller/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.642940 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/frr/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.653458 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/reloader/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.658321 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/frr-metrics/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.667411 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/kube-rbac-proxy/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.676008 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/kube-rbac-proxy-frr/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.681957 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-frr-files/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.689079 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-reloader/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.694321 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-metrics/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.724679 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c6bfc4dcb-mzr68_1ddec7fa-7afd-4d77-af77-509910e52c70/manager/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.741385 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-79cc595b65-5xnzn_050f3d44-1ff2-4334-8fa8-c5124c7199d9/webhook-server/0.log" Jan 21 16:48:50 crc kubenswrapper[4902]: I0121 16:48:50.359823 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5m6ct_4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501/speaker/0.log" Jan 21 16:48:50 crc kubenswrapper[4902]: I0121 16:48:50.520938 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5m6ct_4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501/kube-rbac-proxy/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.132032 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj_b4942197-db6e-4bb6-af6d-24694a007a0b/extract/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.145416 4902 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj_b4942197-db6e-4bb6-af6d-24694a007a0b/util/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.171721 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj_b4942197-db6e-4bb6-af6d-24694a007a0b/pull/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.181949 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw_5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea/extract/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.190335 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw_5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea/util/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.200501 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw_5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea/pull/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.214550 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr_91ab62d2-e4b6-44ce-afc8-292ac5685c46/extract/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.225434 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr_91ab62d2-e4b6-44ce-afc8-292ac5685c46/util/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.236297 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr_91ab62d2-e4b6-44ce-afc8-292ac5685c46/pull/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.253187 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn_052d7e2b-1135-41ae-8c3e-a750c22fce27/extract/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.262598 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn_052d7e2b-1135-41ae-8c3e-a750c22fce27/util/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.272133 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn_052d7e2b-1135-41ae-8c3e-a750c22fce27/pull/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.933910 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mklsf_2ec2690b-73b2-45db-b14b-355b80ab92a6/registry-server/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.943376 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mklsf_2ec2690b-73b2-45db-b14b-355b80ab92a6/extract-utilities/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.950057 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mklsf_2ec2690b-73b2-45db-b14b-355b80ab92a6/extract-content/0.log" Jan 21 16:48:54 crc kubenswrapper[4902]: 
I0121 16:48:54.848522 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsh4_e5fe57c1-6b56-4abe-8067-dd74165e5937/registry-server/0.log" Jan 21 16:48:54 crc kubenswrapper[4902]: I0121 16:48:54.853354 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsh4_e5fe57c1-6b56-4abe-8067-dd74165e5937/extract-utilities/0.log" Jan 21 16:48:54 crc kubenswrapper[4902]: I0121 16:48:54.864897 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsh4_e5fe57c1-6b56-4abe-8067-dd74165e5937/extract-content/0.log" Jan 21 16:48:54 crc kubenswrapper[4902]: I0121 16:48:54.893865 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-z4vkp_021a0823-715d-4b67-b5b2-b52ec6d6c7e8/marketplace-operator/0.log" Jan 21 16:48:55 crc kubenswrapper[4902]: I0121 16:48:55.223962 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ppndl_663aee99-c55e-45ba-b5ff-a67def0f524e/registry-server/0.log" Jan 21 16:48:55 crc kubenswrapper[4902]: I0121 16:48:55.229873 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ppndl_663aee99-c55e-45ba-b5ff-a67def0f524e/extract-utilities/0.log" Jan 21 16:48:55 crc kubenswrapper[4902]: I0121 16:48:55.236333 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ppndl_663aee99-c55e-45ba-b5ff-a67def0f524e/extract-content/0.log" Jan 21 16:48:56 crc kubenswrapper[4902]: I0121 16:48:56.278106 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8kplb_fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c/registry-server/0.log" Jan 21 16:48:56 crc kubenswrapper[4902]: I0121 16:48:56.283211 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8kplb_fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c/extract-utilities/0.log" Jan 21 16:48:56 crc kubenswrapper[4902]: I0121 16:48:56.290526 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8kplb_fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c/extract-content/0.log" Jan 21 16:48:59 crc kubenswrapper[4902]: I0121 16:48:59.433707 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-tw4cr_5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5/prometheus-operator/0.log" Jan 21 16:48:59 crc kubenswrapper[4902]: I0121 16:48:59.446271 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cb6669c59-csqks_c014cd52-9da2-4fa7-96b6-0a400835f56e/prometheus-operator-admission-webhook/0.log" Jan 21 16:48:59 crc kubenswrapper[4902]: I0121 16:48:59.459167 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cb6669c59-l469x_dce978e0-318d-4086-8594-08da83f1fe23/prometheus-operator-admission-webhook/0.log" Jan 21 16:48:59 crc kubenswrapper[4902]: I0121 16:48:59.478572 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-6xc5d_cdfe14cf-a2d6-4df7-92b5-c4146bdab44d/operator/0.log" Jan 21 16:48:59 crc kubenswrapper[4902]: I0121 16:48:59.487893 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-k6f6k_ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a/perses-operator/0.log" Jan 21 16:49:17 crc kubenswrapper[4902]: I0121 16:49:17.769601 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:49:17 crc kubenswrapper[4902]: I0121 16:49:17.770219 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:49:47 crc kubenswrapper[4902]: I0121 16:49:47.770134 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:49:47 crc kubenswrapper[4902]: I0121 16:49:47.770629 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:50:17 crc kubenswrapper[4902]: I0121 16:50:17.770128 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:50:17 crc kubenswrapper[4902]: I0121 16:50:17.770551 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:50:17 crc kubenswrapper[4902]: I0121 16:50:17.770592 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 16:50:17 crc kubenswrapper[4902]: I0121 16:50:17.771435 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:50:17 crc kubenswrapper[4902]: I0121 16:50:17.771483 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" gracePeriod=600 Jan 21 16:50:18 crc kubenswrapper[4902]: E0121 16:50:18.408900 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:50:18 crc kubenswrapper[4902]: I0121 16:50:18.659902 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" exitCode=0 Jan 21 16:50:18 crc kubenswrapper[4902]: I0121 16:50:18.659947 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc"} Jan 21 16:50:18 crc kubenswrapper[4902]: I0121 16:50:18.659981 4902 scope.go:117] "RemoveContainer" containerID="46dab60a77a31c9c125a7eb039a17b28b44898970f8705055f9ff1b6d0fef030" Jan 21 16:50:18 crc kubenswrapper[4902]: I0121 16:50:18.660915 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:50:18 crc kubenswrapper[4902]: E0121 16:50:18.661289 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:50:30 crc kubenswrapper[4902]: I0121 16:50:30.295233 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:50:30 crc kubenswrapper[4902]: E0121 16:50:30.296018 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:50:43 crc kubenswrapper[4902]: I0121 16:50:43.295467 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:50:43 crc kubenswrapper[4902]: E0121 16:50:43.296182 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:50:45 crc kubenswrapper[4902]: I0121 16:50:45.348340 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-tw4cr_5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5/prometheus-operator/0.log" Jan 21 16:50:45 crc kubenswrapper[4902]: I0121 16:50:45.360367 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cb6669c59-csqks_c014cd52-9da2-4fa7-96b6-0a400835f56e/prometheus-operator-admission-webhook/0.log" Jan 21 16:50:45 crc kubenswrapper[4902]: I0121 16:50:45.371389 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cb6669c59-l469x_dce978e0-318d-4086-8594-08da83f1fe23/prometheus-operator-admission-webhook/0.log" Jan 21 16:50:45 crc kubenswrapper[4902]: I0121 16:50:45.397169 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-6xc5d_cdfe14cf-a2d6-4df7-92b5-c4146bdab44d/operator/0.log" Jan 21 16:50:45 crc kubenswrapper[4902]: I0121 16:50:45.405058 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-k6f6k_ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a/perses-operator/0.log" Jan 21 16:50:45 crc kubenswrapper[4902]: I0121 16:50:45.604171 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-4cd6m_12dae6d4-a2b1-4ef8-ae74-369697c9172b/cert-manager-controller/0.log" Jan 21 16:50:45 crc kubenswrapper[4902]: I0121 16:50:45.637952 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-llf68_21799993-1de7-4aef-9cfa-c132249ecf74/cert-manager-cainjector/0.log" Jan 21 16:50:45 crc kubenswrapper[4902]: I0121 16:50:45.650984 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-p2522_9093daac-4fd2-4075-8e73-d358cd885c3c/cert-manager-webhook/0.log" Jan 21 16:50:46 crc kubenswrapper[4902]: I0121 16:50:46.537772 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/extract/0.log" Jan 21 16:50:46 crc kubenswrapper[4902]: I0121 16:50:46.552575 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/util/0.log" Jan 21 16:50:46 crc kubenswrapper[4902]: I0121 16:50:46.560845 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/pull/0.log" Jan 21 16:50:46 crc kubenswrapper[4902]: I0121 16:50:46.721715 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-j6fwd_66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e/manager/0.log" Jan 21 16:50:46 crc kubenswrapper[4902]: I0121 16:50:46.849528 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-nh8zr_b924ea4f-71c9-4f42-aa0a-a4945ea589e3/manager/0.log" Jan 21 16:50:46 crc kubenswrapper[4902]: I0121 16:50:46.865081 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-sdkxs_bc4c2749-7073-4bb8-8c87-736187565b08/manager/0.log" Jan 21 16:50:46 crc kubenswrapper[4902]: I0121 16:50:46.995468 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-h2pgt_694bf42b-c612-44c2-964b-c91336b8afa1/controller/0.log" Jan 21 16:50:47 crc kubenswrapper[4902]: I0121 16:50:47.005567 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-h2pgt_694bf42b-c612-44c2-964b-c91336b8afa1/kube-rbac-proxy/0.log" Jan 21 16:50:47 crc kubenswrapper[4902]: I0121 16:50:47.017723 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-72rgj_4f8bf62b-aae0-4080-a5ee-2472a60fe41f/frr-k8s-webhook-server/0.log" Jan 21 16:50:47 crc kubenswrapper[4902]: I0121 16:50:47.046499 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-gffs4_3c1e8b4d-a47d-4a6e-be63-bfc41d04d964/manager/0.log" Jan 21 16:50:47 crc kubenswrapper[4902]: I0121 16:50:47.053875 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/controller/0.log" Jan 21 16:50:47 crc kubenswrapper[4902]: I0121 16:50:47.113644 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-lttm9_56c38bff-8549-485e-a91f-1d89d801a8ee/manager/0.log" Jan 21 16:50:47 crc kubenswrapper[4902]: I0121 16:50:47.138200 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-nqnfh_05001c4b-c8f0-46ea-bf02-d7537d8a373b/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.009589 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-46xm9_cea39ffd-421f-4b74-9f26-065f49e00786/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.027489 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-khcxt_f3f5f576-48b8-4175-8d70-d8de7e41a63a/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.232136 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-qwcvn_7d33c2a4-c369-4a5f-9592-289c162f095c/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.243114 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-x6xrb_a5d9aa95-7d14-4a6e-af38-dddad85007f4/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.323955 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-xrlqr_01091192-af46-486f-8890-787505f3b41c/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.423809 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-8vfnj_0b55bf9c-cc65-446c-849e-035fb1bba4c4/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.643157 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-nql9r_b01862fd-dfad-4a73-ac90-5ef7823c06ea/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.702531 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-c2nb6_bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.727318 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b9875986dhp6x8_14dc1630-021a-4b05-8ac4-d99368b51726/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.953575 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d4d7d8545-mvcwp_1fbcd3da-0b42-4d83-b774-776f9d1612d5/operator/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.897991 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/frr/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.927393 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/reloader/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.931886 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/frr-metrics/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.939932 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/kube-rbac-proxy/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.949187 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/kube-rbac-proxy-frr/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.956430 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-frr-files/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.957886 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75bfd788c8-hr66g_77e35131-84f1-4df7-b6de-ceda247df931/manager/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.966445 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-reloader/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.973096 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-metrics/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.999756 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c6bfc4dcb-mzr68_1ddec7fa-7afd-4d77-af77-509910e52c70/manager/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.008470 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-79cc595b65-5xnzn_050f3d44-1ff2-4334-8fa8-c5124c7199d9/webhook-server/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.143536 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-dp8mf_2d05d6f5-a861-4117-b4a0-00e98da2fe57/registry-server/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.284692 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lljfd_3912b1da-b132-48da-9b67-1f4aeb2203c4/manager/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.329758 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-pmvgc_c5d64dc8-80f6-4076-9068-11ec25d524b5/manager/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.378097 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-s7vgs_1ffd452b-d331-4c80-a6f6-0b1b21d5fd84/operator/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.436949 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-wqmq2_1e685238-529c-4964-af9d-8abed4dfcfae/manager/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.700476 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-v7bj9_2ad74206-4131-4395-8392-9697c2c164eb/manager/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.714763 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-gn5kf_624ad6d5-5647-43c8-8e62-751e4c5989b3/manager/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.724940 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-s8g8n_6783daa1-082d-4ab7-be65-dc2fb211be6c/manager/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.932288 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5m6ct_4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501/speaker/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.941750 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5m6ct_4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501/kube-rbac-proxy/0.log" Jan 21 16:50:53 crc kubenswrapper[4902]: I0121 16:50:53.776331 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-4cd6m_12dae6d4-a2b1-4ef8-ae74-369697c9172b/cert-manager-controller/0.log" Jan 21 16:50:53 crc kubenswrapper[4902]: I0121 16:50:53.799585 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-llf68_21799993-1de7-4aef-9cfa-c132249ecf74/cert-manager-cainjector/0.log" Jan 21 16:50:53 crc kubenswrapper[4902]: I0121 16:50:53.817004 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-p2522_9093daac-4fd2-4075-8e73-d358cd885c3c/cert-manager-webhook/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: I0121 16:50:54.201675 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-6vz5c_ce3bf701-2498-42d7-969d-8944df02f1c7/nmstate-console-plugin/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: I0121 16:50:54.221132 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-p9t9n_14dd02e5-8cb3-4382-9107-5f5b698a2701/nmstate-handler/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: I0121 16:50:54.232402 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-x6qnj_d406f136-7416-4694-b6cd-d6bdf6b60e1f/nmstate-metrics/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: I0121 16:50:54.240882 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-x6qnj_d406f136-7416-4694-b6cd-d6bdf6b60e1f/kube-rbac-proxy/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: 
I0121 16:50:54.256652 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-q2fs2_bb74694a-8b82-4c31-85da-4ba2c732bbb8/nmstate-operator/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: I0121 16:50:54.267945 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-88bkr_87768889-c41f-4563-8b38-3d939fa22303/nmstate-webhook/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: I0121 16:50:54.466608 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qm6gk_9467c15f-f3fe-4594-b97d-0838d43877d1/control-plane-machine-set-operator/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: I0121 16:50:54.481108 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-57jmg_91a268d0-59c0-4e7f-8b78-260d14051e34/kube-rbac-proxy/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: I0121 16:50:54.490342 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-57jmg_91a268d0-59c0-4e7f-8b78-260d14051e34/machine-api-operator/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.102651 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/extract/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.109748 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/util/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.117649 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/pull/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.252920 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-j6fwd_66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e/manager/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.330680 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-nh8zr_b924ea4f-71c9-4f42-aa0a-a4945ea589e3/manager/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.341591 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-sdkxs_bc4c2749-7073-4bb8-8c87-736187565b08/manager/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.573820 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-gffs4_3c1e8b4d-a47d-4a6e-be63-bfc41d04d964/manager/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.635727 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-lttm9_56c38bff-8549-485e-a91f-1d89d801a8ee/manager/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.655513 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-nqnfh_05001c4b-c8f0-46ea-bf02-d7537d8a373b/manager/0.log" Jan 21 16:50:56 crc kubenswrapper[4902]: I0121 
16:50:56.295810 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:50:56 crc kubenswrapper[4902]: E0121 16:50:56.296186 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:50:56 crc kubenswrapper[4902]: I0121 16:50:56.434355 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-46xm9_cea39ffd-421f-4b74-9f26-065f49e00786/manager/0.log" Jan 21 16:50:56 crc kubenswrapper[4902]: I0121 16:50:56.446832 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-khcxt_f3f5f576-48b8-4175-8d70-d8de7e41a63a/manager/0.log" Jan 21 16:50:56 crc kubenswrapper[4902]: I0121 16:50:56.617604 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-qwcvn_7d33c2a4-c369-4a5f-9592-289c162f095c/manager/0.log" Jan 21 16:50:56 crc kubenswrapper[4902]: I0121 16:50:56.632161 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-x6xrb_a5d9aa95-7d14-4a6e-af38-dddad85007f4/manager/0.log" Jan 21 16:50:56 crc kubenswrapper[4902]: I0121 16:50:56.720660 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-xrlqr_01091192-af46-486f-8890-787505f3b41c/manager/0.log" Jan 21 16:50:56 crc kubenswrapper[4902]: I0121 16:50:56.804018 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-8vfnj_0b55bf9c-cc65-446c-849e-035fb1bba4c4/manager/0.log" Jan 21 16:50:56 crc kubenswrapper[4902]: I0121 16:50:56.975322 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-nql9r_b01862fd-dfad-4a73-ac90-5ef7823c06ea/manager/0.log" Jan 21 16:50:57 crc kubenswrapper[4902]: I0121 16:50:57.030674 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-c2nb6_bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90/manager/0.log" Jan 21 16:50:57 crc kubenswrapper[4902]: I0121 16:50:57.051875 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b9875986dhp6x8_14dc1630-021a-4b05-8ac4-d99368b51726/manager/0.log" Jan 21 16:50:57 crc kubenswrapper[4902]: I0121 16:50:57.225021 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d4d7d8545-mvcwp_1fbcd3da-0b42-4d83-b774-776f9d1612d5/operator/0.log" Jan 21 16:50:59 crc kubenswrapper[4902]: I0121 16:50:59.581512 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75bfd788c8-hr66g_77e35131-84f1-4df7-b6de-ceda247df931/manager/0.log" Jan 21 16:50:59 crc kubenswrapper[4902]: I0121 16:50:59.719133 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-dp8mf_2d05d6f5-a861-4117-b4a0-00e98da2fe57/registry-server/0.log" Jan 21 16:50:59 crc kubenswrapper[4902]: I0121 16:50:59.810492 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lljfd_3912b1da-b132-48da-9b67-1f4aeb2203c4/manager/0.log" Jan 21 16:50:59 crc kubenswrapper[4902]: I0121 16:50:59.858473 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-pmvgc_c5d64dc8-80f6-4076-9068-11ec25d524b5/manager/0.log" Jan 21 16:50:59 crc kubenswrapper[4902]: I0121 16:50:59.887999 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-s7vgs_1ffd452b-d331-4c80-a6f6-0b1b21d5fd84/operator/0.log" Jan 21 16:50:59 crc kubenswrapper[4902]: I0121 16:50:59.930630 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-wqmq2_1e685238-529c-4964-af9d-8abed4dfcfae/manager/0.log" Jan 21 16:51:00 crc kubenswrapper[4902]: I0121 16:51:00.126558 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-v7bj9_2ad74206-4131-4395-8392-9697c2c164eb/manager/0.log" Jan 21 16:51:00 crc kubenswrapper[4902]: I0121 16:51:00.135435 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-gn5kf_624ad6d5-5647-43c8-8e62-751e4c5989b3/manager/0.log" Jan 21 16:51:00 crc kubenswrapper[4902]: I0121 16:51:00.146328 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-s8g8n_6783daa1-082d-4ab7-be65-dc2fb211be6c/manager/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.827221 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h68nf_7dbee8a9-6952-46b5-a958-ff8f1847fabd/kube-multus-additional-cni-plugins/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.835447 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h68nf_7dbee8a9-6952-46b5-a958-ff8f1847fabd/egress-router-binary-copy/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.844350 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h68nf_7dbee8a9-6952-46b5-a958-ff8f1847fabd/cni-plugins/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.851000 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h68nf_7dbee8a9-6952-46b5-a958-ff8f1847fabd/bond-cni-plugin/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.860447 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h68nf_7dbee8a9-6952-46b5-a958-ff8f1847fabd/routeoverride-cni/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.873828 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h68nf_7dbee8a9-6952-46b5-a958-ff8f1847fabd/whereabouts-cni-bincopy/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.880954 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h68nf_7dbee8a9-6952-46b5-a958-ff8f1847fabd/whereabouts-cni/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.929718 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-q69sb_4c2958e3-5395-4efd-8b8f-f3e70fd9fcea/multus-admission-controller/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.937127 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-q69sb_4c2958e3-5395-4efd-8b8f-f3e70fd9fcea/kube-rbac-proxy/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.986675 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/2.log" Jan 21 16:51:02 crc kubenswrapper[4902]: I0121 16:51:02.199195 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/3.log" Jan 21 16:51:02 crc kubenswrapper[4902]: I0121 16:51:02.253551 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-kq588_05d94e6a-249a-484c-8895-085e81f1dfaa/network-metrics-daemon/0.log" Jan 21 16:51:02 crc kubenswrapper[4902]: I0121 16:51:02.261154 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-kq588_05d94e6a-249a-484c-8895-085e81f1dfaa/kube-rbac-proxy/0.log" Jan 21 16:51:08 crc kubenswrapper[4902]: I0121 16:51:08.305657 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:51:08 crc kubenswrapper[4902]: E0121 16:51:08.306590 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.805674 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7wg99"] Jan 21 16:51:12 crc kubenswrapper[4902]: E0121 16:51:12.807269 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerName="extract-utilities" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.807302 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerName="extract-utilities" Jan 21 16:51:12 crc kubenswrapper[4902]: E0121 16:51:12.807393 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerName="extract-content" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.807412 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerName="extract-content" Jan 21 16:51:12 crc kubenswrapper[4902]: E0121 16:51:12.807447 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerName="registry-server" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.807464 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" 
containerName="registry-server" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.808004 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerName="registry-server" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.829279 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wg99"] Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.829502 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.969490 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-catalog-content\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.969977 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-utilities\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.970028 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqswl\" (UniqueName: \"kubernetes.io/projected/143242b4-3aff-4e7a-9b24-30668b357d16-kube-api-access-bqswl\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:13 crc kubenswrapper[4902]: I0121 16:51:13.071584 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-catalog-content\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:13 crc kubenswrapper[4902]: I0121 16:51:13.071735 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-utilities\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:13 crc kubenswrapper[4902]: I0121 16:51:13.071773 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqswl\" (UniqueName: \"kubernetes.io/projected/143242b4-3aff-4e7a-9b24-30668b357d16-kube-api-access-bqswl\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:13 crc kubenswrapper[4902]: I0121 16:51:13.072451 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-utilities\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:13 crc kubenswrapper[4902]: I0121 16:51:13.072442 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-catalog-content\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:13 crc kubenswrapper[4902]: I0121 16:51:13.089828 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqswl\" (UniqueName: \"kubernetes.io/projected/143242b4-3aff-4e7a-9b24-30668b357d16-kube-api-access-bqswl\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:13 crc kubenswrapper[4902]: I0121 16:51:13.159602 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:13 crc kubenswrapper[4902]: I0121 16:51:13.814066 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wg99"] Jan 21 16:51:14 crc kubenswrapper[4902]: I0121 16:51:14.215346 4902 generic.go:334] "Generic (PLEG): container finished" podID="143242b4-3aff-4e7a-9b24-30668b357d16" containerID="39fb987f6fc04b8891dfae5f80b4bcb45c7d727cc3da63ac2ce75a0959709d9c" exitCode=0 Jan 21 16:51:14 crc kubenswrapper[4902]: I0121 16:51:14.215436 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wg99" event={"ID":"143242b4-3aff-4e7a-9b24-30668b357d16","Type":"ContainerDied","Data":"39fb987f6fc04b8891dfae5f80b4bcb45c7d727cc3da63ac2ce75a0959709d9c"} Jan 21 16:51:14 crc kubenswrapper[4902]: I0121 16:51:14.215746 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wg99" event={"ID":"143242b4-3aff-4e7a-9b24-30668b357d16","Type":"ContainerStarted","Data":"bf23ec466aed83cc1d86a8222ea846f4133c82d2cb505fed79112940bbd33e10"} Jan 21 16:51:14 crc kubenswrapper[4902]: I0121 16:51:14.219487 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:51:15 crc kubenswrapper[4902]: I0121 16:51:15.233267 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wg99" event={"ID":"143242b4-3aff-4e7a-9b24-30668b357d16","Type":"ContainerStarted","Data":"b26802e67045e55d7b02cb67bbfff611c338ee5ed68aa72686be32a75ef95e8f"} Jan 21 16:51:16 crc kubenswrapper[4902]: E0121 16:51:16.076369 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod143242b4_3aff_4e7a_9b24_30668b357d16.slice/crio-conmon-b26802e67045e55d7b02cb67bbfff611c338ee5ed68aa72686be32a75ef95e8f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod143242b4_3aff_4e7a_9b24_30668b357d16.slice/crio-b26802e67045e55d7b02cb67bbfff611c338ee5ed68aa72686be32a75ef95e8f.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:51:16 crc kubenswrapper[4902]: I0121 16:51:16.243309 4902 generic.go:334] "Generic (PLEG): container finished" podID="143242b4-3aff-4e7a-9b24-30668b357d16" containerID="b26802e67045e55d7b02cb67bbfff611c338ee5ed68aa72686be32a75ef95e8f" exitCode=0 Jan 21 16:51:16 crc kubenswrapper[4902]: I0121 16:51:16.243358 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wg99" 
event={"ID":"143242b4-3aff-4e7a-9b24-30668b357d16","Type":"ContainerDied","Data":"b26802e67045e55d7b02cb67bbfff611c338ee5ed68aa72686be32a75ef95e8f"} Jan 21 16:51:17 crc kubenswrapper[4902]: I0121 16:51:17.260509 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wg99" event={"ID":"143242b4-3aff-4e7a-9b24-30668b357d16","Type":"ContainerStarted","Data":"9d8f98e938d60b36e7af87ed0a3021f240993a31ebde5f6419b6b63c546c26a1"} Jan 21 16:51:17 crc kubenswrapper[4902]: I0121 16:51:17.292223 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7wg99" podStartSLOduration=2.837766131 podStartE2EDuration="5.292203241s" podCreationTimestamp="2026-01-21 16:51:12 +0000 UTC" firstStartedPulling="2026-01-21 16:51:14.219107239 +0000 UTC m=+8236.295940278" lastFinishedPulling="2026-01-21 16:51:16.673544359 +0000 UTC m=+8238.750377388" observedRunningTime="2026-01-21 16:51:17.287391886 +0000 UTC m=+8239.364224915" watchObservedRunningTime="2026-01-21 16:51:17.292203241 +0000 UTC m=+8239.369036270" Jan 21 16:51:23 crc kubenswrapper[4902]: I0121 16:51:23.159905 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:23 crc kubenswrapper[4902]: I0121 16:51:23.160522 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:23 crc kubenswrapper[4902]: I0121 16:51:23.207796 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:23 crc kubenswrapper[4902]: I0121 16:51:23.295120 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:51:23 crc kubenswrapper[4902]: E0121 16:51:23.295633 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:51:23 crc kubenswrapper[4902]: I0121 16:51:23.396129 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:23 crc kubenswrapper[4902]: I0121 16:51:23.446490 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wg99"] Jan 21 16:51:25 crc kubenswrapper[4902]: I0121 16:51:25.349437 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7wg99" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" containerName="registry-server" containerID="cri-o://9d8f98e938d60b36e7af87ed0a3021f240993a31ebde5f6419b6b63c546c26a1" gracePeriod=2 Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.393817 4902 generic.go:334] "Generic (PLEG): container finished" podID="143242b4-3aff-4e7a-9b24-30668b357d16" containerID="9d8f98e938d60b36e7af87ed0a3021f240993a31ebde5f6419b6b63c546c26a1" exitCode=0 Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.394163 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wg99" 
event={"ID":"143242b4-3aff-4e7a-9b24-30668b357d16","Type":"ContainerDied","Data":"9d8f98e938d60b36e7af87ed0a3021f240993a31ebde5f6419b6b63c546c26a1"} Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.509846 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.645987 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqswl\" (UniqueName: \"kubernetes.io/projected/143242b4-3aff-4e7a-9b24-30668b357d16-kube-api-access-bqswl\") pod \"143242b4-3aff-4e7a-9b24-30668b357d16\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.646064 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-catalog-content\") pod \"143242b4-3aff-4e7a-9b24-30668b357d16\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.646139 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-utilities\") pod \"143242b4-3aff-4e7a-9b24-30668b357d16\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.646956 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-utilities" (OuterVolumeSpecName: "utilities") pod "143242b4-3aff-4e7a-9b24-30668b357d16" (UID: "143242b4-3aff-4e7a-9b24-30668b357d16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.652070 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/143242b4-3aff-4e7a-9b24-30668b357d16-kube-api-access-bqswl" (OuterVolumeSpecName: "kube-api-access-bqswl") pod "143242b4-3aff-4e7a-9b24-30668b357d16" (UID: "143242b4-3aff-4e7a-9b24-30668b357d16"). InnerVolumeSpecName "kube-api-access-bqswl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.700401 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "143242b4-3aff-4e7a-9b24-30668b357d16" (UID: "143242b4-3aff-4e7a-9b24-30668b357d16"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.748849 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqswl\" (UniqueName: \"kubernetes.io/projected/143242b4-3aff-4e7a-9b24-30668b357d16-kube-api-access-bqswl\") on node \"crc\" DevicePath \"\"" Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.748879 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.748889 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:51:27 crc kubenswrapper[4902]: I0121 16:51:27.404142 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wg99" event={"ID":"143242b4-3aff-4e7a-9b24-30668b357d16","Type":"ContainerDied","Data":"bf23ec466aed83cc1d86a8222ea846f4133c82d2cb505fed79112940bbd33e10"} Jan 21 16:51:27 crc kubenswrapper[4902]: I0121 16:51:27.404511 4902 scope.go:117] "RemoveContainer" containerID="9d8f98e938d60b36e7af87ed0a3021f240993a31ebde5f6419b6b63c546c26a1" Jan 21 16:51:27 crc kubenswrapper[4902]: I0121 16:51:27.404710 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:27 crc kubenswrapper[4902]: I0121 16:51:27.429289 4902 scope.go:117] "RemoveContainer" containerID="b26802e67045e55d7b02cb67bbfff611c338ee5ed68aa72686be32a75ef95e8f" Jan 21 16:51:27 crc kubenswrapper[4902]: I0121 16:51:27.446577 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wg99"] Jan 21 16:51:27 crc kubenswrapper[4902]: I0121 16:51:27.457480 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7wg99"] Jan 21 16:51:27 crc kubenswrapper[4902]: I0121 16:51:27.463833 4902 scope.go:117] "RemoveContainer" containerID="39fb987f6fc04b8891dfae5f80b4bcb45c7d727cc3da63ac2ce75a0959709d9c" Jan 21 16:51:28 crc kubenswrapper[4902]: I0121 16:51:28.315989 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" path="/var/lib/kubelet/pods/143242b4-3aff-4e7a-9b24-30668b357d16/volumes" Jan 21 16:51:35 crc kubenswrapper[4902]: I0121 16:51:35.301242 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:51:35 crc kubenswrapper[4902]: E0121 16:51:35.303430 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:51:48 crc kubenswrapper[4902]: I0121 16:51:48.302286 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:51:48 crc kubenswrapper[4902]: E0121 16:51:48.304485 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:52:00 crc kubenswrapper[4902]: I0121 16:52:00.294966 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:52:00 crc kubenswrapper[4902]: E0121 16:52:00.295874 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:52:12 crc kubenswrapper[4902]: I0121 16:52:12.295545 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:52:12 crc kubenswrapper[4902]: E0121 16:52:12.296731 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:52:25 crc kubenswrapper[4902]: I0121 16:52:25.296148 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:52:25 crc kubenswrapper[4902]: E0121 16:52:25.297485 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:52:36 crc kubenswrapper[4902]: I0121 16:52:36.533024 4902 scope.go:117] "RemoveContainer" containerID="31fc280c2d7e2874d5d3ebbb00ce9f04de5add4709a413f82f3fb2ff907c3669" Jan 21 16:52:39 crc kubenswrapper[4902]: I0121 16:52:39.295513 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:52:39 crc kubenswrapper[4902]: E0121 16:52:39.296741 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:52:54 crc kubenswrapper[4902]: I0121 16:52:54.295903 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:52:54 crc kubenswrapper[4902]: E0121 16:52:54.296592 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.168278 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-54qdd"]
Jan 21 16:53:01 crc kubenswrapper[4902]: E0121 16:53:01.172580 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" containerName="registry-server"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.172625 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" containerName="registry-server"
Jan 21 16:53:01 crc kubenswrapper[4902]: E0121 16:53:01.172672 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" containerName="extract-content"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.172680 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" containerName="extract-content"
Jan 21 16:53:01 crc kubenswrapper[4902]: E0121 16:53:01.172695 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" containerName="extract-utilities"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.172704 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" containerName="extract-utilities"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.172963 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" containerName="registry-server"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.174864 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.182463 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-54qdd"]
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.359319 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-utilities\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.359396 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-catalog-content\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.359506 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bpnh\" (UniqueName: \"kubernetes.io/projected/4e0d27aa-f265-4f75-b74b-8ec006ae7151-kube-api-access-6bpnh\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.461193 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bpnh\" (UniqueName: \"kubernetes.io/projected/4e0d27aa-f265-4f75-b74b-8ec006ae7151-kube-api-access-6bpnh\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.461422 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-utilities\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.461482 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-catalog-content\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.462198 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-utilities\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.462775 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-catalog-content\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.492151 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bpnh\" (UniqueName: \"kubernetes.io/projected/4e0d27aa-f265-4f75-b74b-8ec006ae7151-kube-api-access-6bpnh\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.504075 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.991194 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-54qdd"]
Jan 21 16:53:02 crc kubenswrapper[4902]: I0121 16:53:02.428711 4902 generic.go:334] "Generic (PLEG): container finished" podID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerID="275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84" exitCode=0
Jan 21 16:53:02 crc kubenswrapper[4902]: I0121 16:53:02.428767 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54qdd" event={"ID":"4e0d27aa-f265-4f75-b74b-8ec006ae7151","Type":"ContainerDied","Data":"275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84"}
Jan 21 16:53:02 crc kubenswrapper[4902]: I0121 16:53:02.428798 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54qdd" event={"ID":"4e0d27aa-f265-4f75-b74b-8ec006ae7151","Type":"ContainerStarted","Data":"ef9cc39908a6f856e15324133e405fe3affe18c520ff62385e5a7910a73603f5"}
Jan 21 16:53:02 crc kubenswrapper[4902]: I0121 16:53:02.966463 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-96gtf"]
Jan 21 16:53:02 crc kubenswrapper[4902]: I0121 16:53:02.972956 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:02 crc kubenswrapper[4902]: I0121 16:53:02.978629 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-96gtf"]
Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.104603 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv8gw\" (UniqueName: \"kubernetes.io/projected/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-kube-api-access-hv8gw\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.105148 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-catalog-content\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.105612 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-utilities\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.209008 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-catalog-content\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.209234 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-utilities\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.209312 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv8gw\" (UniqueName: \"kubernetes.io/projected/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-kube-api-access-hv8gw\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.209709 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-catalog-content\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.209889 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-utilities\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.239137 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv8gw\" (UniqueName: \"kubernetes.io/projected/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-kube-api-access-hv8gw\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.314460 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.851002 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-96gtf"]
Jan 21 16:53:03 crc kubenswrapper[4902]: W0121 16:53:03.854006 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d7d3b40_1a5c_452d_864a_7e67d8c1e7bd.slice/crio-f292bf2a284eb647def3d5cb87f512309efa8438e63e51fdcf6e4e9a64cf90f0 WatchSource:0}: Error finding container f292bf2a284eb647def3d5cb87f512309efa8438e63e51fdcf6e4e9a64cf90f0: Status 404 returned error can't find the container with id f292bf2a284eb647def3d5cb87f512309efa8438e63e51fdcf6e4e9a64cf90f0
Jan 21 16:53:04 crc kubenswrapper[4902]: I0121 16:53:04.460070 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54qdd" event={"ID":"4e0d27aa-f265-4f75-b74b-8ec006ae7151","Type":"ContainerStarted","Data":"1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10"}
Jan 21 16:53:04 crc kubenswrapper[4902]: I0121 16:53:04.461593 4902 generic.go:334] "Generic (PLEG): container finished" podID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerID="5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984" exitCode=0
Jan 21 16:53:04 crc kubenswrapper[4902]: I0121 16:53:04.461621 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96gtf" event={"ID":"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd","Type":"ContainerDied","Data":"5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984"}
Jan 21 16:53:04 crc kubenswrapper[4902]: I0121 16:53:04.461636 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96gtf" event={"ID":"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd","Type":"ContainerStarted","Data":"f292bf2a284eb647def3d5cb87f512309efa8438e63e51fdcf6e4e9a64cf90f0"}
Jan 21 16:53:05 crc kubenswrapper[4902]: I0121 16:53:05.295541 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc"
Jan 21 16:53:05 crc kubenswrapper[4902]: E0121 16:53:05.296143 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:53:05 crc kubenswrapper[4902]: I0121 16:53:05.471800 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96gtf" event={"ID":"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd","Type":"ContainerStarted","Data":"7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0"}
Jan 21 16:53:06 crc kubenswrapper[4902]: I0121 16:53:06.482229 4902 generic.go:334] "Generic (PLEG): container finished" podID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerID="1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10" exitCode=0
Jan 21 16:53:06 crc kubenswrapper[4902]: I0121 16:53:06.482292 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54qdd" event={"ID":"4e0d27aa-f265-4f75-b74b-8ec006ae7151","Type":"ContainerDied","Data":"1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10"}
Jan 21 16:53:07 crc kubenswrapper[4902]: I0121 16:53:07.507931 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54qdd" event={"ID":"4e0d27aa-f265-4f75-b74b-8ec006ae7151","Type":"ContainerStarted","Data":"70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea"}
Jan 21 16:53:07 crc kubenswrapper[4902]: I0121 16:53:07.513532 4902 generic.go:334] "Generic (PLEG): container finished" podID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerID="7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0" exitCode=0
Jan 21 16:53:07 crc kubenswrapper[4902]: I0121 16:53:07.513576 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96gtf" event={"ID":"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd","Type":"ContainerDied","Data":"7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0"}
Jan 21 16:53:07 crc kubenswrapper[4902]: I0121 16:53:07.532819 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-54qdd" podStartSLOduration=1.9059979249999999 podStartE2EDuration="6.532802882s" podCreationTimestamp="2026-01-21 16:53:01 +0000 UTC" firstStartedPulling="2026-01-21 16:53:02.437024812 +0000 UTC m=+8344.513857841" lastFinishedPulling="2026-01-21 16:53:07.063829769 +0000 UTC m=+8349.140662798" observedRunningTime="2026-01-21 16:53:07.529509089 +0000 UTC m=+8349.606342128" watchObservedRunningTime="2026-01-21 16:53:07.532802882 +0000 UTC m=+8349.609635911"
Jan 21 16:53:08 crc kubenswrapper[4902]: I0121 16:53:08.527758 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96gtf" event={"ID":"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd","Type":"ContainerStarted","Data":"f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1"}
Jan 21 16:53:11 crc kubenswrapper[4902]: I0121 16:53:11.504808 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:11 crc kubenswrapper[4902]: I0121 16:53:11.507709 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:12 crc kubenswrapper[4902]: I0121 16:53:12.567054 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-54qdd" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerName="registry-server" probeResult="failure" output=<
Jan 21 16:53:12 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s
Jan 21 16:53:12 crc kubenswrapper[4902]: >
Jan 21 16:53:13 crc kubenswrapper[4902]: I0121 16:53:13.315018 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:13 crc kubenswrapper[4902]: I0121 16:53:13.315127 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:13 crc kubenswrapper[4902]: I0121 16:53:13.363076 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:13 crc kubenswrapper[4902]: I0121 16:53:13.387378 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-96gtf" podStartSLOduration=7.840538326 podStartE2EDuration="11.387032046s" podCreationTimestamp="2026-01-21 16:53:02 +0000 UTC" firstStartedPulling="2026-01-21 16:53:04.463323941 +0000 UTC m=+8346.540156960" lastFinishedPulling="2026-01-21 16:53:08.009817651 +0000 UTC m=+8350.086650680" observedRunningTime="2026-01-21 16:53:08.55458262 +0000 UTC m=+8350.631415659" watchObservedRunningTime="2026-01-21 16:53:13.387032046 +0000 UTC m=+8355.463865075"
Jan 21 16:53:13 crc kubenswrapper[4902]: I0121 16:53:13.622584 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:13 crc kubenswrapper[4902]: I0121 16:53:13.749910 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-96gtf"]
Jan 21 16:53:15 crc kubenswrapper[4902]: I0121 16:53:15.592113 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-96gtf" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerName="registry-server" containerID="cri-o://f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1" gracePeriod=2
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.087517 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.224624 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-utilities\") pod \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") "
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.224763 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-catalog-content\") pod \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") "
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.224895 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv8gw\" (UniqueName: \"kubernetes.io/projected/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-kube-api-access-hv8gw\") pod \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") "
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.225903 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-utilities" (OuterVolumeSpecName: "utilities") pod "0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" (UID: "0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.231195 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-kube-api-access-hv8gw" (OuterVolumeSpecName: "kube-api-access-hv8gw") pod "0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" (UID: "0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd"). InnerVolumeSpecName "kube-api-access-hv8gw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.268866 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" (UID: "0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.328751 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv8gw\" (UniqueName: \"kubernetes.io/projected/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-kube-api-access-hv8gw\") on node \"crc\" DevicePath \"\""
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.328791 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.328803 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.607010 4902 generic.go:334] "Generic (PLEG): container finished" podID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerID="f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1" exitCode=0
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.607084 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96gtf" event={"ID":"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd","Type":"ContainerDied","Data":"f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1"}
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.607111 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96gtf" event={"ID":"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd","Type":"ContainerDied","Data":"f292bf2a284eb647def3d5cb87f512309efa8438e63e51fdcf6e4e9a64cf90f0"}
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.607127 4902 scope.go:117] "RemoveContainer" containerID="f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1"
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.607471 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96gtf"
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.642710 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-96gtf"]
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.656751 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-96gtf"]
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.659789 4902 scope.go:117] "RemoveContainer" containerID="7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0"
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.688076 4902 scope.go:117] "RemoveContainer" containerID="5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984"
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.740581 4902 scope.go:117] "RemoveContainer" containerID="f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1"
Jan 21 16:53:16 crc kubenswrapper[4902]: E0121 16:53:16.741026 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1\": container with ID starting with f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1 not found: ID does not exist" containerID="f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1"
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.741091 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1"} err="failed to get container status \"f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1\": rpc error: code = NotFound desc = could not find container \"f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1\": container with ID starting with f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1 not found: ID does not exist"
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.741121 4902 scope.go:117] "RemoveContainer" containerID="7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0"
Jan 21 16:53:16 crc kubenswrapper[4902]: E0121 16:53:16.741633 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0\": container with ID starting with 7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0 not found: ID does not exist" containerID="7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0"
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.741664 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0"} err="failed to get container status \"7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0\": rpc error: code = NotFound desc = could not find container \"7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0\": container with ID starting with 7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0 not found: ID does not exist"
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.741697 4902 scope.go:117] "RemoveContainer" containerID="5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984"
Jan 21 16:53:16 crc kubenswrapper[4902]: E0121 16:53:16.742118 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984\": container with ID starting with 5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984 not found: ID does not exist" containerID="5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984"
Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.742147 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984"} err="failed to get container status \"5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984\": rpc error: code = NotFound desc = could not find container \"5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984\": container with ID starting with 5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984 not found: ID does not exist"
Jan 21 16:53:18 crc kubenswrapper[4902]: I0121 16:53:18.308214 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" path="/var/lib/kubelet/pods/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd/volumes"
Jan 21 16:53:19 crc kubenswrapper[4902]: I0121 16:53:19.295446 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc"
Jan 21 16:53:19 crc kubenswrapper[4902]: E0121 16:53:19.296028 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:53:21 crc kubenswrapper[4902]: I0121 16:53:21.571404 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:21 crc kubenswrapper[4902]: I0121 16:53:21.631670 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:21 crc kubenswrapper[4902]: I0121 16:53:21.816369 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-54qdd"]
Jan 21 16:53:22 crc kubenswrapper[4902]: I0121 16:53:22.663328 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-54qdd" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerName="registry-server" containerID="cri-o://70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea" gracePeriod=2
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.191924 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.310891 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bpnh\" (UniqueName: \"kubernetes.io/projected/4e0d27aa-f265-4f75-b74b-8ec006ae7151-kube-api-access-6bpnh\") pod \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") "
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.311003 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-utilities\") pod \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") "
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.311072 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-catalog-content\") pod \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") "
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.312202 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-utilities" (OuterVolumeSpecName: "utilities") pod "4e0d27aa-f265-4f75-b74b-8ec006ae7151" (UID: "4e0d27aa-f265-4f75-b74b-8ec006ae7151"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.318120 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0d27aa-f265-4f75-b74b-8ec006ae7151-kube-api-access-6bpnh" (OuterVolumeSpecName: "kube-api-access-6bpnh") pod "4e0d27aa-f265-4f75-b74b-8ec006ae7151" (UID: "4e0d27aa-f265-4f75-b74b-8ec006ae7151"). InnerVolumeSpecName "kube-api-access-6bpnh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.413034 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bpnh\" (UniqueName: \"kubernetes.io/projected/4e0d27aa-f265-4f75-b74b-8ec006ae7151-kube-api-access-6bpnh\") on node \"crc\" DevicePath \"\""
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.413072 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.442245 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e0d27aa-f265-4f75-b74b-8ec006ae7151" (UID: "4e0d27aa-f265-4f75-b74b-8ec006ae7151"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.514883 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.672892 4902 generic.go:334] "Generic (PLEG): container finished" podID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerID="70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea" exitCode=0
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.672934 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54qdd" event={"ID":"4e0d27aa-f265-4f75-b74b-8ec006ae7151","Type":"ContainerDied","Data":"70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea"}
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.672966 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54qdd"
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.672979 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54qdd" event={"ID":"4e0d27aa-f265-4f75-b74b-8ec006ae7151","Type":"ContainerDied","Data":"ef9cc39908a6f856e15324133e405fe3affe18c520ff62385e5a7910a73603f5"}
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.673009 4902 scope.go:117] "RemoveContainer" containerID="70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea"
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.710289 4902 scope.go:117] "RemoveContainer" containerID="1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10"
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.712649 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-54qdd"]
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.735349 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-54qdd"]
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.751898 4902 scope.go:117] "RemoveContainer" containerID="275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84"
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.797694 4902 scope.go:117] "RemoveContainer" containerID="70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea"
Jan 21 16:53:23 crc kubenswrapper[4902]: E0121 16:53:23.798317 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea\": container with ID starting with 70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea not found: ID does not exist" containerID="70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea"
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.798354 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea"} err="failed to get container status \"70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea\": rpc error: code = NotFound desc = could not find container \"70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea\": container with ID starting with 70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea not found: ID does not exist"
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.798382 4902 scope.go:117] "RemoveContainer" containerID="1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10"
Jan 21 16:53:23 crc kubenswrapper[4902]: E0121 16:53:23.798679 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10\": container with ID starting with 1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10 not found: ID does not exist" containerID="1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10"
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.798709 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10"} err="failed to get container status \"1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10\": rpc error: code = NotFound desc = could not find container \"1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10\": container with ID starting with 1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10 not found: ID does not exist"
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.798728 4902 scope.go:117] "RemoveContainer" containerID="275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84"
Jan 21 16:53:23 crc kubenswrapper[4902]: E0121 16:53:23.799574 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84\": container with ID starting with 275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84 not found: ID does not exist" containerID="275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84"
Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.799605 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84"} err="failed to get container status \"275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84\": rpc error: code = NotFound desc = could not find container \"275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84\": container with ID starting with 275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84 not found: ID does not exist"
Jan 21 16:53:24 crc kubenswrapper[4902]: I0121 16:53:24.308185 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" path="/var/lib/kubelet/pods/4e0d27aa-f265-4f75-b74b-8ec006ae7151/volumes"
Jan 21 16:53:31 crc kubenswrapper[4902]: I0121 16:53:31.297324 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc"
Jan 21 16:53:31 crc kubenswrapper[4902]: E0121 16:53:31.299158 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 16:53:44 crc kubenswrapper[4902]: I0121 16:53:44.296771 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc"
Jan 21 16:53:44 crc kubenswrapper[4902]: E0121 16:53:44.297607 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:53:56 crc kubenswrapper[4902]: I0121 16:53:56.296544 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:53:56 crc kubenswrapper[4902]: E0121 16:53:56.297823 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:54:11 crc kubenswrapper[4902]: I0121 16:54:11.295890 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:54:11 crc kubenswrapper[4902]: E0121 16:54:11.299205 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:54:23 crc kubenswrapper[4902]: I0121 16:54:23.295095 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:54:23 crc kubenswrapper[4902]: E0121 16:54:23.296119 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:54:35 crc kubenswrapper[4902]: I0121 16:54:35.298754 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:54:35 crc kubenswrapper[4902]: E0121 16:54:35.299576 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:54:49 crc kubenswrapper[4902]: I0121 16:54:49.297384 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:54:49 crc kubenswrapper[4902]: E0121 16:54:49.304792 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:55:02 crc kubenswrapper[4902]: I0121 16:55:02.294940 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:55:02 crc kubenswrapper[4902]: E0121 16:55:02.295646 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:55:13 crc kubenswrapper[4902]: I0121 16:55:13.294889 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:55:13 crc kubenswrapper[4902]: E0121 16:55:13.296269 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:55:24 crc kubenswrapper[4902]: I0121 16:55:24.294871 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:55:25 crc kubenswrapper[4902]: I0121 16:55:25.003700 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"9f776d5840d31ab607fa3e29bc18a92f4b5a166c8a644779f2d27b3187fe84b1"} Jan 21 16:57:47 crc kubenswrapper[4902]: I0121 16:57:47.770221 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:57:47 crc kubenswrapper[4902]: I0121 16:57:47.770844 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:58:17 crc kubenswrapper[4902]: I0121 16:58:17.769557 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:58:17 crc kubenswrapper[4902]: I0121 16:58:17.770110 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:58:47 crc kubenswrapper[4902]: I0121 16:58:47.770332 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:58:47 crc kubenswrapper[4902]: I0121 16:58:47.770847 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:58:47 crc kubenswrapper[4902]: I0121 16:58:47.770914 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 16:58:47 crc kubenswrapper[4902]: I0121 16:58:47.771827 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f776d5840d31ab607fa3e29bc18a92f4b5a166c8a644779f2d27b3187fe84b1"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:58:47 crc kubenswrapper[4902]: I0121 16:58:47.771903 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://9f776d5840d31ab607fa3e29bc18a92f4b5a166c8a644779f2d27b3187fe84b1" gracePeriod=600 Jan 21 16:58:48 crc kubenswrapper[4902]: I0121 16:58:48.829525 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="9f776d5840d31ab607fa3e29bc18a92f4b5a166c8a644779f2d27b3187fe84b1" exitCode=0 Jan 21 16:58:48 crc kubenswrapper[4902]: I0121 16:58:48.829585 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"9f776d5840d31ab607fa3e29bc18a92f4b5a166c8a644779f2d27b3187fe84b1"} Jan 21 16:58:48 crc kubenswrapper[4902]: I0121 16:58:48.830027 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd"} Jan 21 16:58:48 crc kubenswrapper[4902]: I0121 16:58:48.830053 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.431687 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rs8bm"] Jan 21 16:58:50 crc kubenswrapper[4902]: E0121 16:58:50.432615 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerName="extract-content" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.432628 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" 
containerName="extract-content" Jan 21 16:58:50 crc kubenswrapper[4902]: E0121 16:58:50.432650 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerName="extract-utilities" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.432658 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerName="extract-utilities" Jan 21 16:58:50 crc kubenswrapper[4902]: E0121 16:58:50.432681 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerName="extract-content" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.432688 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerName="extract-content" Jan 21 16:58:50 crc kubenswrapper[4902]: E0121 16:58:50.432700 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerName="registry-server" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.432706 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerName="registry-server" Jan 21 16:58:50 crc kubenswrapper[4902]: E0121 16:58:50.432713 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerName="extract-utilities" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.432718 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerName="extract-utilities" Jan 21 16:58:50 crc kubenswrapper[4902]: E0121 16:58:50.432733 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerName="registry-server" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.432739 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerName="registry-server" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.432929 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerName="registry-server" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.432955 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerName="registry-server" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.437647 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.446309 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs8bm"] Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.570404 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-utilities\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.570493 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-catalog-content\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.570576 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvj9s\" (UniqueName: \"kubernetes.io/projected/06359608-b433-4b84-8058-97775f4976ff-kube-api-access-hvj9s\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.672246 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvj9s\" (UniqueName: \"kubernetes.io/projected/06359608-b433-4b84-8058-97775f4976ff-kube-api-access-hvj9s\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.672580 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-utilities\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.672675 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-catalog-content\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.673289 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-catalog-content\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.673334 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-utilities\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.695507 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hvj9s\" (UniqueName: \"kubernetes.io/projected/06359608-b433-4b84-8058-97775f4976ff-kube-api-access-hvj9s\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.757902 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:51 crc kubenswrapper[4902]: I0121 16:58:51.296539 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs8bm"] Jan 21 16:58:51 crc kubenswrapper[4902]: W0121 16:58:51.299092 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06359608_b433_4b84_8058_97775f4976ff.slice/crio-a5bfc0b8a99730dfe546e6d8b4235e9b0d61cbcfbd4125eacac833383f98740f WatchSource:0}: Error finding container a5bfc0b8a99730dfe546e6d8b4235e9b0d61cbcfbd4125eacac833383f98740f: Status 404 returned error can't find the container with id a5bfc0b8a99730dfe546e6d8b4235e9b0d61cbcfbd4125eacac833383f98740f Jan 21 16:58:51 crc kubenswrapper[4902]: I0121 16:58:51.858352 4902 generic.go:334] "Generic (PLEG): container finished" podID="06359608-b433-4b84-8058-97775f4976ff" containerID="6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686" exitCode=0 Jan 21 16:58:51 crc kubenswrapper[4902]: I0121 16:58:51.858420 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs8bm" event={"ID":"06359608-b433-4b84-8058-97775f4976ff","Type":"ContainerDied","Data":"6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686"} Jan 21 16:58:51 crc kubenswrapper[4902]: I0121 16:58:51.858664 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs8bm" event={"ID":"06359608-b433-4b84-8058-97775f4976ff","Type":"ContainerStarted","Data":"a5bfc0b8a99730dfe546e6d8b4235e9b0d61cbcfbd4125eacac833383f98740f"} Jan 21 16:58:51 crc kubenswrapper[4902]: I0121 16:58:51.861058 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:58:53 crc kubenswrapper[4902]: I0121 16:58:53.882273 4902 generic.go:334] "Generic (PLEG): container finished" podID="06359608-b433-4b84-8058-97775f4976ff" containerID="938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860" exitCode=0 Jan 21 16:58:53 crc kubenswrapper[4902]: I0121 16:58:53.882388 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs8bm" event={"ID":"06359608-b433-4b84-8058-97775f4976ff","Type":"ContainerDied","Data":"938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860"} Jan 21 16:58:55 crc kubenswrapper[4902]: I0121 16:58:55.905493 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs8bm" event={"ID":"06359608-b433-4b84-8058-97775f4976ff","Type":"ContainerStarted","Data":"415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9"} Jan 21 16:58:55 crc kubenswrapper[4902]: I0121 16:58:55.930599 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rs8bm" podStartSLOduration=3.462393264 podStartE2EDuration="5.930581152s" podCreationTimestamp="2026-01-21 16:58:50 +0000 UTC" firstStartedPulling="2026-01-21 16:58:51.860798079 +0000 UTC m=+8693.937631108" 
lastFinishedPulling="2026-01-21 16:58:54.328985977 +0000 UTC m=+8696.405818996" observedRunningTime="2026-01-21 16:58:55.924015817 +0000 UTC m=+8698.000848856" watchObservedRunningTime="2026-01-21 16:58:55.930581152 +0000 UTC m=+8698.007414181" Jan 21 16:59:00 crc kubenswrapper[4902]: I0121 16:59:00.758486 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:59:00 crc kubenswrapper[4902]: I0121 16:59:00.758968 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:59:00 crc kubenswrapper[4902]: I0121 16:59:00.814850 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:59:01 crc kubenswrapper[4902]: I0121 16:59:01.019503 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:59:01 crc kubenswrapper[4902]: I0121 16:59:01.084150 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs8bm"] Jan 21 16:59:02 crc kubenswrapper[4902]: I0121 16:59:02.970101 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rs8bm" podUID="06359608-b433-4b84-8058-97775f4976ff" containerName="registry-server" containerID="cri-o://415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9" gracePeriod=2 Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.452995 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.602712 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvj9s\" (UniqueName: \"kubernetes.io/projected/06359608-b433-4b84-8058-97775f4976ff-kube-api-access-hvj9s\") pod \"06359608-b433-4b84-8058-97775f4976ff\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.602898 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-utilities\") pod \"06359608-b433-4b84-8058-97775f4976ff\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.602994 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-catalog-content\") pod \"06359608-b433-4b84-8058-97775f4976ff\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.603789 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-utilities" (OuterVolumeSpecName: "utilities") pod "06359608-b433-4b84-8058-97775f4976ff" (UID: "06359608-b433-4b84-8058-97775f4976ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.608193 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06359608-b433-4b84-8058-97775f4976ff-kube-api-access-hvj9s" (OuterVolumeSpecName: "kube-api-access-hvj9s") pod "06359608-b433-4b84-8058-97775f4976ff" (UID: "06359608-b433-4b84-8058-97775f4976ff"). InnerVolumeSpecName "kube-api-access-hvj9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.625242 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06359608-b433-4b84-8058-97775f4976ff" (UID: "06359608-b433-4b84-8058-97775f4976ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.705697 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.705737 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvj9s\" (UniqueName: \"kubernetes.io/projected/06359608-b433-4b84-8058-97775f4976ff-kube-api-access-hvj9s\") on node \"crc\" DevicePath \"\"" Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.705749 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.984460 4902 generic.go:334] "Generic (PLEG): container finished" podID="06359608-b433-4b84-8058-97775f4976ff" containerID="415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9" exitCode=0 Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.984507 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs8bm" event={"ID":"06359608-b433-4b84-8058-97775f4976ff","Type":"ContainerDied","Data":"415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9"} Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.985451 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs8bm" event={"ID":"06359608-b433-4b84-8058-97775f4976ff","Type":"ContainerDied","Data":"a5bfc0b8a99730dfe546e6d8b4235e9b0d61cbcfbd4125eacac833383f98740f"} Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.984572 4902 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.985477 4902 scope.go:117] "RemoveContainer" containerID="415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9"
Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.013616 4902 scope.go:117] "RemoveContainer" containerID="938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860"
Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.035918 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs8bm"]
Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.045546 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs8bm"]
Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.057816 4902 scope.go:117] "RemoveContainer" containerID="6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686"
Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.105605 4902 scope.go:117] "RemoveContainer" containerID="415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9"
Jan 21 16:59:04 crc kubenswrapper[4902]: E0121 16:59:04.106157 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9\": container with ID starting with 415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9 not found: ID does not exist" containerID="415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9"
Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.106205 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9"} err="failed to get container status \"415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9\": rpc error: code = NotFound desc = could not find container \"415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9\": container with ID starting with 415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9 not found: ID does not exist"
Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.106227 4902 scope.go:117] "RemoveContainer" containerID="938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860"
Jan 21 16:59:04 crc kubenswrapper[4902]: E0121 16:59:04.106438 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860\": container with ID starting with 938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860 not found: ID does not exist" containerID="938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860"
Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.106470 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860"} err="failed to get container status \"938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860\": rpc error: code = NotFound desc = could not find container \"938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860\": container with ID starting with 938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860 not found: ID does not exist"
Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.106488 4902 scope.go:117] "RemoveContainer" containerID="6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686"
containerID="6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686" Jan 21 16:59:04 crc kubenswrapper[4902]: E0121 16:59:04.106730 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686\": container with ID starting with 6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686 not found: ID does not exist" containerID="6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686" Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.106747 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686"} err="failed to get container status \"6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686\": rpc error: code = NotFound desc = could not find container \"6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686\": container with ID starting with 6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686 not found: ID does not exist" Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.307804 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06359608-b433-4b84-8058-97775f4976ff" path="/var/lib/kubelet/pods/06359608-b433-4b84-8058-97775f4976ff/volumes" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.156266 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5"] Jan 21 17:00:00 crc kubenswrapper[4902]: E0121 17:00:00.157133 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06359608-b433-4b84-8058-97775f4976ff" containerName="extract-content" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.157145 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="06359608-b433-4b84-8058-97775f4976ff" containerName="extract-content" Jan 21 17:00:00 crc kubenswrapper[4902]: E0121 17:00:00.157167 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06359608-b433-4b84-8058-97775f4976ff" containerName="registry-server" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.157173 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="06359608-b433-4b84-8058-97775f4976ff" containerName="registry-server" Jan 21 17:00:00 crc kubenswrapper[4902]: E0121 17:00:00.157196 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06359608-b433-4b84-8058-97775f4976ff" containerName="extract-utilities" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.157202 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="06359608-b433-4b84-8058-97775f4976ff" containerName="extract-utilities" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.157406 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="06359608-b433-4b84-8058-97775f4976ff" containerName="registry-server" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.158127 4902 util.go:30] "No sandbox for pod can be found. 
Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.165393 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.165405 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.175316 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5"]
Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.321670 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4bb0bd-43ae-4455-8695-1123a4597e26-config-volume\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5"
Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.321835 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqj9l\" (UniqueName: \"kubernetes.io/projected/0b4bb0bd-43ae-4455-8695-1123a4597e26-kube-api-access-wqj9l\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5"
Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.322006 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4bb0bd-43ae-4455-8695-1123a4597e26-secret-volume\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5"
Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.424490 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4bb0bd-43ae-4455-8695-1123a4597e26-config-volume\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5"
Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.424631 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqj9l\" (UniqueName: \"kubernetes.io/projected/0b4bb0bd-43ae-4455-8695-1123a4597e26-kube-api-access-wqj9l\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5"
Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.424850 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4bb0bd-43ae-4455-8695-1123a4597e26-secret-volume\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5"
Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.425718 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4bb0bd-43ae-4455-8695-1123a4597e26-config-volume\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5"
\"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.436653 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4bb0bd-43ae-4455-8695-1123a4597e26-secret-volume\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.461105 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqj9l\" (UniqueName: \"kubernetes.io/projected/0b4bb0bd-43ae-4455-8695-1123a4597e26-kube-api-access-wqj9l\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.487288 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:01 crc kubenswrapper[4902]: I0121 17:00:01.031146 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5"] Jan 21 17:00:01 crc kubenswrapper[4902]: I0121 17:00:01.592741 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" event={"ID":"0b4bb0bd-43ae-4455-8695-1123a4597e26","Type":"ContainerStarted","Data":"367859eb580777758fc90831db5c3fe7bd94cfec159c3396efcd6037139700cd"} Jan 21 17:00:01 crc kubenswrapper[4902]: I0121 17:00:01.592783 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" event={"ID":"0b4bb0bd-43ae-4455-8695-1123a4597e26","Type":"ContainerStarted","Data":"da083988cf2107abe293a910cba9bba2569d0f58d7e960099a1081674b442c1e"} Jan 21 17:00:01 crc kubenswrapper[4902]: I0121 17:00:01.614600 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" podStartSLOduration=1.614582266 podStartE2EDuration="1.614582266s" podCreationTimestamp="2026-01-21 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:00:01.605003996 +0000 UTC m=+8763.681837025" watchObservedRunningTime="2026-01-21 17:00:01.614582266 +0000 UTC m=+8763.691415295" Jan 21 17:00:02 crc kubenswrapper[4902]: I0121 17:00:02.601573 4902 generic.go:334] "Generic (PLEG): container finished" podID="0b4bb0bd-43ae-4455-8695-1123a4597e26" containerID="367859eb580777758fc90831db5c3fe7bd94cfec159c3396efcd6037139700cd" exitCode=0 Jan 21 17:00:02 crc kubenswrapper[4902]: I0121 17:00:02.601787 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" event={"ID":"0b4bb0bd-43ae-4455-8695-1123a4597e26","Type":"ContainerDied","Data":"367859eb580777758fc90831db5c3fe7bd94cfec159c3396efcd6037139700cd"} Jan 21 17:00:03 crc kubenswrapper[4902]: I0121 17:00:03.998361 4902 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.122340 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqj9l\" (UniqueName: \"kubernetes.io/projected/0b4bb0bd-43ae-4455-8695-1123a4597e26-kube-api-access-wqj9l\") pod \"0b4bb0bd-43ae-4455-8695-1123a4597e26\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") "
Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.122451 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4bb0bd-43ae-4455-8695-1123a4597e26-config-volume\") pod \"0b4bb0bd-43ae-4455-8695-1123a4597e26\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") "
Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.122760 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4bb0bd-43ae-4455-8695-1123a4597e26-secret-volume\") pod \"0b4bb0bd-43ae-4455-8695-1123a4597e26\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") "
Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.123415 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b4bb0bd-43ae-4455-8695-1123a4597e26-config-volume" (OuterVolumeSpecName: "config-volume") pod "0b4bb0bd-43ae-4455-8695-1123a4597e26" (UID: "0b4bb0bd-43ae-4455-8695-1123a4597e26"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.131589 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4bb0bd-43ae-4455-8695-1123a4597e26-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0b4bb0bd-43ae-4455-8695-1123a4597e26" (UID: "0b4bb0bd-43ae-4455-8695-1123a4597e26"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.132754 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b4bb0bd-43ae-4455-8695-1123a4597e26-kube-api-access-wqj9l" (OuterVolumeSpecName: "kube-api-access-wqj9l") pod "0b4bb0bd-43ae-4455-8695-1123a4597e26" (UID: "0b4bb0bd-43ae-4455-8695-1123a4597e26"). InnerVolumeSpecName "kube-api-access-wqj9l". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.224774 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqj9l\" (UniqueName: \"kubernetes.io/projected/0b4bb0bd-43ae-4455-8695-1123a4597e26-kube-api-access-wqj9l\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.224814 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4bb0bd-43ae-4455-8695-1123a4597e26-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.224825 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4bb0bd-43ae-4455-8695-1123a4597e26-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.626224 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" event={"ID":"0b4bb0bd-43ae-4455-8695-1123a4597e26","Type":"ContainerDied","Data":"da083988cf2107abe293a910cba9bba2569d0f58d7e960099a1081674b442c1e"} Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.626269 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da083988cf2107abe293a910cba9bba2569d0f58d7e960099a1081674b442c1e" Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.626335 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.710939 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"] Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.721567 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"] Jan 21 17:00:06 crc kubenswrapper[4902]: I0121 17:00:06.318732 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3234509-8b7b-4b77-9a80-f496d21a727e" path="/var/lib/kubelet/pods/c3234509-8b7b-4b77-9a80-f496d21a727e/volumes" Jan 21 17:00:36 crc kubenswrapper[4902]: I0121 17:00:36.841455 4902 scope.go:117] "RemoveContainer" containerID="4e8300ed14fa669d6234d502917b52e699b6641dda6ef60268cdbc2afafd8313" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.166590 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483581-p4mmj"] Jan 21 17:01:00 crc kubenswrapper[4902]: E0121 17:01:00.168640 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b4bb0bd-43ae-4455-8695-1123a4597e26" containerName="collect-profiles" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.168895 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b4bb0bd-43ae-4455-8695-1123a4597e26" containerName="collect-profiles" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.169544 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b4bb0bd-43ae-4455-8695-1123a4597e26" containerName="collect-profiles" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.171436 4902 util.go:30] "No sandbox for pod can be found. 
Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.182350 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483581-p4mmj"]
Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.223288 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-combined-ca-bundle\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj"
Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.223386 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb88v\" (UniqueName: \"kubernetes.io/projected/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-kube-api-access-hb88v\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj"
Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.223436 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-fernet-keys\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj"
Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.223694 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-config-data\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj"
Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.325636 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-fernet-keys\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj"
Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.325798 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-config-data\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj"
Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.325850 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-combined-ca-bundle\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj"
Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.325997 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb88v\" (UniqueName: \"kubernetes.io/projected/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-kube-api-access-hb88v\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj"
Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.333663 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-combined-ca-bundle\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj"
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-combined-ca-bundle\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.334342 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-fernet-keys\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.343120 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-config-data\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.352327 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb88v\" (UniqueName: \"kubernetes.io/projected/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-kube-api-access-hb88v\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.514161 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:01 crc kubenswrapper[4902]: I0121 17:01:01.009167 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483581-p4mmj"] Jan 21 17:01:01 crc kubenswrapper[4902]: W0121 17:01:01.020938 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d2ca2cf_e9dd_4a00_b422_a84ffd14648c.slice/crio-a347605711377bdcb3ac91b7e81ecb742b4ca2002afc6ac3d222e5c71a6460ab WatchSource:0}: Error finding container a347605711377bdcb3ac91b7e81ecb742b4ca2002afc6ac3d222e5c71a6460ab: Status 404 returned error can't find the container with id a347605711377bdcb3ac91b7e81ecb742b4ca2002afc6ac3d222e5c71a6460ab Jan 21 17:01:01 crc kubenswrapper[4902]: I0121 17:01:01.311522 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-p4mmj" event={"ID":"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c","Type":"ContainerStarted","Data":"80e4b0de8f14cb4dd8ffc7536921b01171eab937e2dd0722aac183b116cb2d3e"} Jan 21 17:01:01 crc kubenswrapper[4902]: I0121 17:01:01.311968 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-p4mmj" event={"ID":"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c","Type":"ContainerStarted","Data":"a347605711377bdcb3ac91b7e81ecb742b4ca2002afc6ac3d222e5c71a6460ab"} Jan 21 17:01:04 crc kubenswrapper[4902]: I0121 17:01:04.361156 4902 generic.go:334] "Generic (PLEG): container finished" podID="2d2ca2cf-e9dd-4a00-b422-a84ffd14648c" containerID="80e4b0de8f14cb4dd8ffc7536921b01171eab937e2dd0722aac183b116cb2d3e" exitCode=0 Jan 21 17:01:04 crc kubenswrapper[4902]: I0121 17:01:04.361229 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-p4mmj" event={"ID":"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c","Type":"ContainerDied","Data":"80e4b0de8f14cb4dd8ffc7536921b01171eab937e2dd0722aac183b116cb2d3e"} Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.803644 4902 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.882766 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-fernet-keys\") pod \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.883795 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-combined-ca-bundle\") pod \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.883993 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb88v\" (UniqueName: \"kubernetes.io/projected/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-kube-api-access-hb88v\") pod \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.884178 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-config-data\") pod \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.889206 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2d2ca2cf-e9dd-4a00-b422-a84ffd14648c" (UID: "2d2ca2cf-e9dd-4a00-b422-a84ffd14648c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.894711 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-kube-api-access-hb88v" (OuterVolumeSpecName: "kube-api-access-hb88v") pod "2d2ca2cf-e9dd-4a00-b422-a84ffd14648c" (UID: "2d2ca2cf-e9dd-4a00-b422-a84ffd14648c"). InnerVolumeSpecName "kube-api-access-hb88v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.915634 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d2ca2cf-e9dd-4a00-b422-a84ffd14648c" (UID: "2d2ca2cf-e9dd-4a00-b422-a84ffd14648c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.960336 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-config-data" (OuterVolumeSpecName: "config-data") pod "2d2ca2cf-e9dd-4a00-b422-a84ffd14648c" (UID: "2d2ca2cf-e9dd-4a00-b422-a84ffd14648c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.988557 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb88v\" (UniqueName: \"kubernetes.io/projected/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-kube-api-access-hb88v\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.988592 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.988602 4902 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.988611 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:06 crc kubenswrapper[4902]: I0121 17:01:06.400672 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:06 crc kubenswrapper[4902]: I0121 17:01:06.400712 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-p4mmj" event={"ID":"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c","Type":"ContainerDied","Data":"a347605711377bdcb3ac91b7e81ecb742b4ca2002afc6ac3d222e5c71a6460ab"} Jan 21 17:01:06 crc kubenswrapper[4902]: I0121 17:01:06.400753 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a347605711377bdcb3ac91b7e81ecb742b4ca2002afc6ac3d222e5c71a6460ab" Jan 21 17:01:17 crc kubenswrapper[4902]: I0121 17:01:17.769813 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:01:17 crc kubenswrapper[4902]: I0121 17:01:17.770374 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.148456 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wtcd8"] Jan 21 17:01:44 crc kubenswrapper[4902]: E0121 17:01:44.149618 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d2ca2cf-e9dd-4a00-b422-a84ffd14648c" containerName="keystone-cron" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.149638 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d2ca2cf-e9dd-4a00-b422-a84ffd14648c" containerName="keystone-cron" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.149911 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d2ca2cf-e9dd-4a00-b422-a84ffd14648c" containerName="keystone-cron" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.152122 4902 util.go:30] "No sandbox for pod can be found. 
Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.164474 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wtcd8"]
Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.176919 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q5t7\" (UniqueName: \"kubernetes.io/projected/e63a3374-1941-4924-9ddf-e2638ebd9da5-kube-api-access-8q5t7\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8"
Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.177011 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-utilities\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8"
Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.178383 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-catalog-content\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8"
Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.280805 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q5t7\" (UniqueName: \"kubernetes.io/projected/e63a3374-1941-4924-9ddf-e2638ebd9da5-kube-api-access-8q5t7\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8"
Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.281331 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-utilities\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8"
Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.281489 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-catalog-content\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8"
Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.282386 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-catalog-content\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8"
Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.282435 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-utilities\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8"
Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.319686 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q5t7\" (UniqueName: \"kubernetes.io/projected/e63a3374-1941-4924-9ddf-e2638ebd9da5-kube-api-access-8q5t7\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8"
"MountVolume.SetUp succeeded for volume \"kube-api-access-8q5t7\" (UniqueName: \"kubernetes.io/projected/e63a3374-1941-4924-9ddf-e2638ebd9da5-kube-api-access-8q5t7\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.483557 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:45 crc kubenswrapper[4902]: I0121 17:01:45.202443 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wtcd8"] Jan 21 17:01:45 crc kubenswrapper[4902]: I0121 17:01:45.917501 4902 generic.go:334] "Generic (PLEG): container finished" podID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerID="f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3" exitCode=0 Jan 21 17:01:45 crc kubenswrapper[4902]: I0121 17:01:45.917679 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtcd8" event={"ID":"e63a3374-1941-4924-9ddf-e2638ebd9da5","Type":"ContainerDied","Data":"f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3"} Jan 21 17:01:45 crc kubenswrapper[4902]: I0121 17:01:45.917764 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtcd8" event={"ID":"e63a3374-1941-4924-9ddf-e2638ebd9da5","Type":"ContainerStarted","Data":"9a0d90b266baaa13b235a2e441a52fffd29cb0edf367e73f91a2c5e00d545288"} Jan 21 17:01:47 crc kubenswrapper[4902]: I0121 17:01:47.771478 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:01:47 crc kubenswrapper[4902]: I0121 17:01:47.772488 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:01:47 crc kubenswrapper[4902]: I0121 17:01:47.936428 4902 generic.go:334] "Generic (PLEG): container finished" podID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerID="46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a" exitCode=0 Jan 21 17:01:47 crc kubenswrapper[4902]: I0121 17:01:47.936483 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtcd8" event={"ID":"e63a3374-1941-4924-9ddf-e2638ebd9da5","Type":"ContainerDied","Data":"46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a"} Jan 21 17:01:48 crc kubenswrapper[4902]: I0121 17:01:48.949107 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtcd8" event={"ID":"e63a3374-1941-4924-9ddf-e2638ebd9da5","Type":"ContainerStarted","Data":"b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46"} Jan 21 17:01:48 crc kubenswrapper[4902]: I0121 17:01:48.975718 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wtcd8" podStartSLOduration=2.523178127 podStartE2EDuration="4.975702774s" podCreationTimestamp="2026-01-21 17:01:44 +0000 UTC" 
Jan 21 17:01:54 crc kubenswrapper[4902]: I0121 17:01:54.484779 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wtcd8"
Jan 21 17:01:54 crc kubenswrapper[4902]: I0121 17:01:54.486218 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wtcd8"
Jan 21 17:01:54 crc kubenswrapper[4902]: I0121 17:01:54.556257 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wtcd8"
Jan 21 17:01:55 crc kubenswrapper[4902]: I0121 17:01:55.063393 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wtcd8"
Jan 21 17:01:55 crc kubenswrapper[4902]: I0121 17:01:55.147094 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wtcd8"]
Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.034878 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wtcd8" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerName="registry-server" containerID="cri-o://b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46" gracePeriod=2
Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.550625 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wtcd8"
Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.563869 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-catalog-content\") pod \"e63a3374-1941-4924-9ddf-e2638ebd9da5\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") "
Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.564436 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q5t7\" (UniqueName: \"kubernetes.io/projected/e63a3374-1941-4924-9ddf-e2638ebd9da5-kube-api-access-8q5t7\") pod \"e63a3374-1941-4924-9ddf-e2638ebd9da5\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") "
Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.564653 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-utilities\") pod \"e63a3374-1941-4924-9ddf-e2638ebd9da5\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") "
Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.567078 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-utilities" (OuterVolumeSpecName: "utilities") pod "e63a3374-1941-4924-9ddf-e2638ebd9da5" (UID: "e63a3374-1941-4924-9ddf-e2638ebd9da5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.576593 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e63a3374-1941-4924-9ddf-e2638ebd9da5-kube-api-access-8q5t7" (OuterVolumeSpecName: "kube-api-access-8q5t7") pod "e63a3374-1941-4924-9ddf-e2638ebd9da5" (UID: "e63a3374-1941-4924-9ddf-e2638ebd9da5"). InnerVolumeSpecName "kube-api-access-8q5t7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.667449 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q5t7\" (UniqueName: \"kubernetes.io/projected/e63a3374-1941-4924-9ddf-e2638ebd9da5-kube-api-access-8q5t7\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.667479 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.757516 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e63a3374-1941-4924-9ddf-e2638ebd9da5" (UID: "e63a3374-1941-4924-9ddf-e2638ebd9da5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.768915 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.051752 4902 generic.go:334] "Generic (PLEG): container finished" podID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerID="b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46" exitCode=0 Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.051801 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtcd8" event={"ID":"e63a3374-1941-4924-9ddf-e2638ebd9da5","Type":"ContainerDied","Data":"b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46"} Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.052114 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtcd8" event={"ID":"e63a3374-1941-4924-9ddf-e2638ebd9da5","Type":"ContainerDied","Data":"9a0d90b266baaa13b235a2e441a52fffd29cb0edf367e73f91a2c5e00d545288"} Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.052140 4902 scope.go:117] "RemoveContainer" containerID="b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46" Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.051878 4902 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.087069 4902 scope.go:117] "RemoveContainer" containerID="46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a"
Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.102189 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wtcd8"]
Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.120334 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wtcd8"]
Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.130622 4902 scope.go:117] "RemoveContainer" containerID="f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3"
Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.180299 4902 scope.go:117] "RemoveContainer" containerID="b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46"
Jan 21 17:01:58 crc kubenswrapper[4902]: E0121 17:01:58.180633 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46\": container with ID starting with b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46 not found: ID does not exist" containerID="b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46"
Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.180669 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46"} err="failed to get container status \"b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46\": rpc error: code = NotFound desc = could not find container \"b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46\": container with ID starting with b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46 not found: ID does not exist"
Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.180691 4902 scope.go:117] "RemoveContainer" containerID="46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a"
Jan 21 17:01:58 crc kubenswrapper[4902]: E0121 17:01:58.181134 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a\": container with ID starting with 46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a not found: ID does not exist" containerID="46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a"
Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.181197 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a"} err="failed to get container status \"46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a\": rpc error: code = NotFound desc = could not find container \"46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a\": container with ID starting with 46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a not found: ID does not exist"
Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.181236 4902 scope.go:117] "RemoveContainer" containerID="f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3"
Jan 21 17:01:58 crc kubenswrapper[4902]: E0121 17:01:58.181515 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3\": container with ID starting with f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3 not found: ID does not exist" containerID="f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3"
failed" err="rpc error: code = NotFound desc = could not find container \"f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3\": container with ID starting with f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3 not found: ID does not exist" containerID="f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3" Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.181536 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3"} err="failed to get container status \"f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3\": rpc error: code = NotFound desc = could not find container \"f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3\": container with ID starting with f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3 not found: ID does not exist" Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.340199 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" path="/var/lib/kubelet/pods/e63a3374-1941-4924-9ddf-e2638ebd9da5/volumes" Jan 21 17:02:17 crc kubenswrapper[4902]: I0121 17:02:17.769727 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:02:17 crc kubenswrapper[4902]: I0121 17:02:17.770317 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:02:17 crc kubenswrapper[4902]: I0121 17:02:17.770497 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 17:02:17 crc kubenswrapper[4902]: I0121 17:02:17.771519 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:02:17 crc kubenswrapper[4902]: I0121 17:02:17.771579 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" gracePeriod=600 Jan 21 17:02:17 crc kubenswrapper[4902]: E0121 17:02:17.898804 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:02:18 crc kubenswrapper[4902]: I0121 17:02:18.251933 4902 generic.go:334] 
"Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" exitCode=0 Jan 21 17:02:18 crc kubenswrapper[4902]: I0121 17:02:18.251981 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd"} Jan 21 17:02:18 crc kubenswrapper[4902]: I0121 17:02:18.252018 4902 scope.go:117] "RemoveContainer" containerID="9f776d5840d31ab607fa3e29bc18a92f4b5a166c8a644779f2d27b3187fe84b1" Jan 21 17:02:18 crc kubenswrapper[4902]: I0121 17:02:18.252866 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:02:18 crc kubenswrapper[4902]: E0121 17:02:18.253174 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:02:32 crc kubenswrapper[4902]: I0121 17:02:32.295945 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:02:32 crc kubenswrapper[4902]: E0121 17:02:32.296562 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:02:47 crc kubenswrapper[4902]: I0121 17:02:47.295631 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:02:47 crc kubenswrapper[4902]: E0121 17:02:47.296230 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:02:58 crc kubenswrapper[4902]: I0121 17:02:58.302485 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:02:58 crc kubenswrapper[4902]: E0121 17:02:58.303335 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:03:12 crc kubenswrapper[4902]: I0121 17:03:12.295372 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" 
Jan 21 17:03:12 crc kubenswrapper[4902]: E0121 17:03:12.296086 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:03:26 crc kubenswrapper[4902]: I0121 17:03:26.296088 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:03:26 crc kubenswrapper[4902]: E0121 17:03:26.296874 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.275774 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4bfxj"] Jan 21 17:03:35 crc kubenswrapper[4902]: E0121 17:03:35.277183 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerName="registry-server" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.277199 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerName="registry-server" Jan 21 17:03:35 crc kubenswrapper[4902]: E0121 17:03:35.277215 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerName="extract-content" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.277221 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerName="extract-content" Jan 21 17:03:35 crc kubenswrapper[4902]: E0121 17:03:35.277247 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerName="extract-utilities" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.277253 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerName="extract-utilities" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.277433 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerName="registry-server" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.280456 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4bfxj"
Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.300716 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bfxj"]
Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.350569 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-utilities\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj"
Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.350877 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-catalog-content\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj"
Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.350967 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkznt\" (UniqueName: \"kubernetes.io/projected/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-kube-api-access-hkznt\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj"
Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.452333 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-catalog-content\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj"
Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.452418 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkznt\" (UniqueName: \"kubernetes.io/projected/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-kube-api-access-hkznt\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj"
Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.452518 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-utilities\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj"
Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.453354 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-utilities\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj"
Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.453765 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-catalog-content\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj"
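
Volume setup for the new catalog pod follows the reconciler's usual two phases: VerifyControllerAttachedVolume first, then MountVolume.SetUp. Everything here is node-local: two emptyDirs plus the automatically injected projected service-account token (kube-api-access-hkznt). The pod manifest itself is not in the log, but the volume section it implies would look roughly like the assumed reconstruction below.

```go
// Sketch of the volumes implied by the reconciler entries above: two
// emptyDir volumes shared between the catalog pod's containers. This is an
// assumed reconstruction; the actual CatalogSource pod spec is not in this
// log, and the projected token volume is injected rather than declared.
package volumes

import corev1 "k8s.io/api/core/v1"

func catalogVolumes() []corev1.Volume {
	emptyDir := func(name string) corev1.Volume {
		return corev1.Volume{
			Name:         name,
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		}
	}
	return []corev1.Volume{emptyDir("utilities"), emptyDir("catalog-content")}
}
```
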
"MountVolume.SetUp succeeded for volume \"kube-api-access-hkznt\" (UniqueName: \"kubernetes.io/projected/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-kube-api-access-hkznt\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.624711 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:36 crc kubenswrapper[4902]: I0121 17:03:36.327822 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bfxj"] Jan 21 17:03:37 crc kubenswrapper[4902]: I0121 17:03:37.120375 4902 generic.go:334] "Generic (PLEG): container finished" podID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerID="9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf" exitCode=0 Jan 21 17:03:37 crc kubenswrapper[4902]: I0121 17:03:37.121092 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bfxj" event={"ID":"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b","Type":"ContainerDied","Data":"9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf"} Jan 21 17:03:37 crc kubenswrapper[4902]: I0121 17:03:37.121147 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bfxj" event={"ID":"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b","Type":"ContainerStarted","Data":"d0832327e40c84a599a387827caf79267c3e8b7bc23b9d54b39042684a4664a0"} Jan 21 17:03:38 crc kubenswrapper[4902]: I0121 17:03:38.131398 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bfxj" event={"ID":"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b","Type":"ContainerStarted","Data":"a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5"} Jan 21 17:03:39 crc kubenswrapper[4902]: I0121 17:03:39.143863 4902 generic.go:334] "Generic (PLEG): container finished" podID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerID="a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5" exitCode=0 Jan 21 17:03:39 crc kubenswrapper[4902]: I0121 17:03:39.144210 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bfxj" event={"ID":"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b","Type":"ContainerDied","Data":"a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5"} Jan 21 17:03:39 crc kubenswrapper[4902]: I0121 17:03:39.294921 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:03:39 crc kubenswrapper[4902]: E0121 17:03:39.296066 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:03:41 crc kubenswrapper[4902]: I0121 17:03:41.172215 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bfxj" event={"ID":"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b","Type":"ContainerStarted","Data":"dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280"} Jan 21 17:03:41 crc kubenswrapper[4902]: I0121 17:03:41.205536 4902 
Jan 21 17:03:41 crc kubenswrapper[4902]: I0121 17:03:41.205536 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4bfxj" podStartSLOduration=3.7859893810000003 podStartE2EDuration="6.205512808s" podCreationTimestamp="2026-01-21 17:03:35 +0000 UTC" firstStartedPulling="2026-01-21 17:03:37.123066517 +0000 UTC m=+8979.199899556" lastFinishedPulling="2026-01-21 17:03:39.542589944 +0000 UTC m=+8981.619422983" observedRunningTime="2026-01-21 17:03:41.197985796 +0000 UTC m=+8983.274818825" watchObservedRunningTime="2026-01-21 17:03:41.205512808 +0000 UTC m=+8983.282345837"
Jan 21 17:03:45 crc kubenswrapper[4902]: I0121 17:03:45.625395 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4bfxj"
Jan 21 17:03:45 crc kubenswrapper[4902]: I0121 17:03:45.625939 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4bfxj"
Jan 21 17:03:45 crc kubenswrapper[4902]: I0121 17:03:45.679582 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4bfxj"
Jan 21 17:03:46 crc kubenswrapper[4902]: I0121 17:03:46.276541 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4bfxj"
Jan 21 17:03:46 crc kubenswrapper[4902]: I0121 17:03:46.348233 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4bfxj"]
Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.235121 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4bfxj" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerName="registry-server" containerID="cri-o://dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280" gracePeriod=2
Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.731984 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bfxj"
Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.880097 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-catalog-content\") pod \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") "
Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.880211 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-utilities\") pod \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") "
Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.880432 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkznt\" (UniqueName: \"kubernetes.io/projected/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-kube-api-access-hkznt\") pod \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") "
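
The startup-latency entry at the top of this block is internally consistent: podStartSLOduration is the end-to-end startup time minus the image-pull window, which is how the tracker keeps pull latency out of the startup SLI. Checking the logged numbers:

```go
// Verifying the pod_startup_latency_tracker entry above:
// podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling).
package slocheck

import "time"

func check() bool {
	e2e := 6205512808 * time.Nanosecond  // podStartE2EDuration = 6.205512808s
	pull := 2419523427 * time.Nanosecond // 17:03:39.542589944 - 17:03:37.123066517
	slo := 3785989381 * time.Nanosecond  // podStartSLOduration ≈ 3.785989381s
	return e2e-pull == slo               // true: the tracker's arithmetic checks out
}
```
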
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.892129 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-kube-api-access-hkznt" (OuterVolumeSpecName: "kube-api-access-hkznt") pod "9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" (UID: "9606c3e6-5b1d-4c14-a719-7f3ede91dc0b"). InnerVolumeSpecName "kube-api-access-hkznt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.926801 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" (UID: "9606c3e6-5b1d-4c14-a719-7f3ede91dc0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.983373 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.983592 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.983606 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkznt\" (UniqueName: \"kubernetes.io/projected/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-kube-api-access-hkznt\") on node \"crc\" DevicePath \"\"" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.247404 4902 generic.go:334] "Generic (PLEG): container finished" podID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerID="dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280" exitCode=0 Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.247576 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bfxj" event={"ID":"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b","Type":"ContainerDied","Data":"dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280"} Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.248654 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bfxj" event={"ID":"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b","Type":"ContainerDied","Data":"d0832327e40c84a599a387827caf79267c3e8b7bc23b9d54b39042684a4664a0"} Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.248733 4902 scope.go:117] "RemoveContainer" containerID="dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.247647 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.270660 4902 scope.go:117] "RemoveContainer" containerID="a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.298251 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4bfxj"] Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.309036 4902 scope.go:117] "RemoveContainer" containerID="9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.312714 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4bfxj"] Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.383357 4902 scope.go:117] "RemoveContainer" containerID="dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280" Jan 21 17:03:49 crc kubenswrapper[4902]: E0121 17:03:49.383888 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280\": container with ID starting with dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280 not found: ID does not exist" containerID="dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.383932 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280"} err="failed to get container status \"dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280\": rpc error: code = NotFound desc = could not find container \"dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280\": container with ID starting with dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280 not found: ID does not exist" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.383960 4902 scope.go:117] "RemoveContainer" containerID="a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5" Jan 21 17:03:49 crc kubenswrapper[4902]: E0121 17:03:49.384319 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5\": container with ID starting with a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5 not found: ID does not exist" containerID="a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.384350 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5"} err="failed to get container status \"a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5\": rpc error: code = NotFound desc = could not find container \"a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5\": container with ID starting with a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5 not found: ID does not exist" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.384367 4902 scope.go:117] "RemoveContainer" containerID="9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf" Jan 21 17:03:49 crc kubenswrapper[4902]: E0121 17:03:49.384707 4902 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf\": container with ID starting with 9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf not found: ID does not exist" containerID="9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.384747 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf"} err="failed to get container status \"9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf\": rpc error: code = NotFound desc = could not find container \"9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf\": container with ID starting with 9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf not found: ID does not exist" Jan 21 17:03:50 crc kubenswrapper[4902]: I0121 17:03:50.305130 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" path="/var/lib/kubelet/pods/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b/volumes" Jan 21 17:03:54 crc kubenswrapper[4902]: I0121 17:03:54.295176 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:03:54 crc kubenswrapper[4902]: E0121 17:03:54.295979 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:04:06 crc kubenswrapper[4902]: I0121 17:04:06.296072 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:04:06 crc kubenswrapper[4902]: E0121 17:04:06.297053 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:04:21 crc kubenswrapper[4902]: I0121 17:04:21.295133 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:04:21 crc kubenswrapper[4902]: E0121 17:04:21.296403 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:04:34 crc kubenswrapper[4902]: I0121 17:04:34.295704 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:04:34 crc kubenswrapper[4902]: E0121 17:04:34.296496 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:04:47 crc kubenswrapper[4902]: I0121 17:04:47.295289 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:04:47 crc kubenswrapper[4902]: E0121 17:04:47.297360 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:04:59 crc kubenswrapper[4902]: I0121 17:04:59.295594 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:04:59 crc kubenswrapper[4902]: E0121 17:04:59.296331 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.411996 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fr7wb"] Jan 21 17:05:10 crc kubenswrapper[4902]: E0121 17:05:10.413509 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerName="extract-utilities" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.413531 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerName="extract-utilities" Jan 21 17:05:10 crc kubenswrapper[4902]: E0121 17:05:10.413589 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerName="extract-content" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.413601 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerName="extract-content" Jan 21 17:05:10 crc kubenswrapper[4902]: E0121 17:05:10.413624 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerName="registry-server" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.413633 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerName="registry-server" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.414079 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerName="registry-server" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.416161 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fr7wb"] Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.416271 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.476185 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr4fw\" (UniqueName: \"kubernetes.io/projected/ff52a854-5102-46f9-9d63-f3c3db18aab6-kube-api-access-kr4fw\") pod \"redhat-operators-fr7wb\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.478031 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-utilities\") pod \"redhat-operators-fr7wb\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.478839 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-catalog-content\") pod \"redhat-operators-fr7wb\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.580883 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-catalog-content\") pod \"redhat-operators-fr7wb\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.581312 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr4fw\" (UniqueName: \"kubernetes.io/projected/ff52a854-5102-46f9-9d63-f3c3db18aab6-kube-api-access-kr4fw\") pod \"redhat-operators-fr7wb\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.581543 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-catalog-content\") pod \"redhat-operators-fr7wb\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.581729 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-utilities\") pod \"redhat-operators-fr7wb\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.582083 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-utilities\") pod \"redhat-operators-fr7wb\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.603788 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr4fw\" (UniqueName: \"kubernetes.io/projected/ff52a854-5102-46f9-9d63-f3c3db18aab6-kube-api-access-kr4fw\") pod \"redhat-operators-fr7wb\" (UID: 
\"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.751997 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:11 crc kubenswrapper[4902]: I0121 17:05:11.246055 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fr7wb"] Jan 21 17:05:12 crc kubenswrapper[4902]: I0121 17:05:12.178542 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerID="bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2" exitCode=0 Jan 21 17:05:12 crc kubenswrapper[4902]: I0121 17:05:12.178627 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7wb" event={"ID":"ff52a854-5102-46f9-9d63-f3c3db18aab6","Type":"ContainerDied","Data":"bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2"} Jan 21 17:05:12 crc kubenswrapper[4902]: I0121 17:05:12.178880 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7wb" event={"ID":"ff52a854-5102-46f9-9d63-f3c3db18aab6","Type":"ContainerStarted","Data":"c1f6bcbe0f7b0209e28301ed188965109ae19d5e6c3be4afa9e0fd5913151b90"} Jan 21 17:05:12 crc kubenswrapper[4902]: I0121 17:05:12.183272 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:05:14 crc kubenswrapper[4902]: I0121 17:05:14.200294 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7wb" event={"ID":"ff52a854-5102-46f9-9d63-f3c3db18aab6","Type":"ContainerStarted","Data":"1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c"} Jan 21 17:05:14 crc kubenswrapper[4902]: I0121 17:05:14.296395 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:05:14 crc kubenswrapper[4902]: E0121 17:05:14.296765 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:05:15 crc kubenswrapper[4902]: I0121 17:05:15.213267 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerID="1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c" exitCode=0 Jan 21 17:05:15 crc kubenswrapper[4902]: I0121 17:05:15.213454 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7wb" event={"ID":"ff52a854-5102-46f9-9d63-f3c3db18aab6","Type":"ContainerDied","Data":"1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c"} Jan 21 17:05:20 crc kubenswrapper[4902]: I0121 17:05:20.273535 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7wb" event={"ID":"ff52a854-5102-46f9-9d63-f3c3db18aab6","Type":"ContainerStarted","Data":"63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d"} Jan 21 17:05:20 crc kubenswrapper[4902]: I0121 17:05:20.288822 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-fr7wb" podStartSLOduration=2.484739224 podStartE2EDuration="10.288802212s" podCreationTimestamp="2026-01-21 17:05:10 +0000 UTC" firstStartedPulling="2026-01-21 17:05:12.182925134 +0000 UTC m=+9074.259758173" lastFinishedPulling="2026-01-21 17:05:19.986988092 +0000 UTC m=+9082.063821161" observedRunningTime="2026-01-21 17:05:20.288313378 +0000 UTC m=+9082.365146447" watchObservedRunningTime="2026-01-21 17:05:20.288802212 +0000 UTC m=+9082.365635251" Jan 21 17:05:20 crc kubenswrapper[4902]: I0121 17:05:20.752810 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:20 crc kubenswrapper[4902]: I0121 17:05:20.753519 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:21 crc kubenswrapper[4902]: I0121 17:05:21.797447 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fr7wb" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="registry-server" probeResult="failure" output=< Jan 21 17:05:21 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 17:05:21 crc kubenswrapper[4902]: > Jan 21 17:05:26 crc kubenswrapper[4902]: I0121 17:05:26.295730 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:05:26 crc kubenswrapper[4902]: E0121 17:05:26.296708 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:05:30 crc kubenswrapper[4902]: I0121 17:05:30.952281 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:31 crc kubenswrapper[4902]: I0121 17:05:31.007590 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:32 crc kubenswrapper[4902]: I0121 17:05:32.650321 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fr7wb"] Jan 21 17:05:32 crc kubenswrapper[4902]: I0121 17:05:32.650788 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fr7wb" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="registry-server" containerID="cri-o://63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d" gracePeriod=2 Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.152965 4902 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.152965 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fr7wb"
Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.258746 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-catalog-content\") pod \"ff52a854-5102-46f9-9d63-f3c3db18aab6\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") "
Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.258805 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr4fw\" (UniqueName: \"kubernetes.io/projected/ff52a854-5102-46f9-9d63-f3c3db18aab6-kube-api-access-kr4fw\") pod \"ff52a854-5102-46f9-9d63-f3c3db18aab6\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") "
Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.258922 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-utilities\") pod \"ff52a854-5102-46f9-9d63-f3c3db18aab6\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") "
Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.259778 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-utilities" (OuterVolumeSpecName: "utilities") pod "ff52a854-5102-46f9-9d63-f3c3db18aab6" (UID: "ff52a854-5102-46f9-9d63-f3c3db18aab6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.260380 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.265156 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff52a854-5102-46f9-9d63-f3c3db18aab6-kube-api-access-kr4fw" (OuterVolumeSpecName: "kube-api-access-kr4fw") pod "ff52a854-5102-46f9-9d63-f3c3db18aab6" (UID: "ff52a854-5102-46f9-9d63-f3c3db18aab6"). InnerVolumeSpecName "kube-api-access-kr4fw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.362083 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr4fw\" (UniqueName: \"kubernetes.io/projected/ff52a854-5102-46f9-9d63-f3c3db18aab6-kube-api-access-kr4fw\") on node \"crc\" DevicePath \"\""
Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.382781 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff52a854-5102-46f9-9d63-f3c3db18aab6" (UID: "ff52a854-5102-46f9-9d63-f3c3db18aab6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.410785 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerID="63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d" exitCode=0 Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.410833 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7wb" event={"ID":"ff52a854-5102-46f9-9d63-f3c3db18aab6","Type":"ContainerDied","Data":"63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d"} Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.410861 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7wb" event={"ID":"ff52a854-5102-46f9-9d63-f3c3db18aab6","Type":"ContainerDied","Data":"c1f6bcbe0f7b0209e28301ed188965109ae19d5e6c3be4afa9e0fd5913151b90"} Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.410864 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.410880 4902 scope.go:117] "RemoveContainer" containerID="63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.455650 4902 scope.go:117] "RemoveContainer" containerID="1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.465660 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.482617 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fr7wb"] Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.496442 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fr7wb"] Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.544969 4902 scope.go:117] "RemoveContainer" containerID="bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.563707 4902 scope.go:117] "RemoveContainer" containerID="63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d" Jan 21 17:05:33 crc kubenswrapper[4902]: E0121 17:05:33.564232 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d\": container with ID starting with 63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d not found: ID does not exist" containerID="63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.564391 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d"} err="failed to get container status \"63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d\": rpc error: code = NotFound desc = could not find container \"63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d\": container with ID starting with 63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d not found: ID does not exist" Jan 21 17:05:33 crc 
kubenswrapper[4902]: I0121 17:05:33.564487 4902 scope.go:117] "RemoveContainer" containerID="1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c" Jan 21 17:05:33 crc kubenswrapper[4902]: E0121 17:05:33.564995 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c\": container with ID starting with 1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c not found: ID does not exist" containerID="1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.565028 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c"} err="failed to get container status \"1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c\": rpc error: code = NotFound desc = could not find container \"1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c\": container with ID starting with 1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c not found: ID does not exist" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.565076 4902 scope.go:117] "RemoveContainer" containerID="bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2" Jan 21 17:05:33 crc kubenswrapper[4902]: E0121 17:05:33.565333 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2\": container with ID starting with bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2 not found: ID does not exist" containerID="bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.565364 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2"} err="failed to get container status \"bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2\": rpc error: code = NotFound desc = could not find container \"bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2\": container with ID starting with bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2 not found: ID does not exist" Jan 21 17:05:34 crc kubenswrapper[4902]: I0121 17:05:34.306859 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" path="/var/lib/kubelet/pods/ff52a854-5102-46f9-9d63-f3c3db18aab6/volumes" Jan 21 17:05:38 crc kubenswrapper[4902]: I0121 17:05:38.309930 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:05:38 crc kubenswrapper[4902]: E0121 17:05:38.310692 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:05:52 crc kubenswrapper[4902]: I0121 17:05:52.295808 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" 
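
Two patterns recur in the teardown above: the delete arrives from the API ("SyncLoop DELETE") and registry-server is killed with a 2-second grace period, and the subsequent RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" trio is just idempotent cleanup, the kubelet re-requesting removal of containers CRI-O has already deleted. With client-go, the API-side delete looks roughly like the sketch below; whether the 2s grace comes from the delete request or from the pod's terminationGracePeriodSeconds is not visible in this log.

```go
// Sketch of the API-side pod deletion behind the "SyncLoop DELETE" entries,
// assuming a client with access to the openshift-marketplace namespace.
// Pod name and namespace are taken from the log; the program around this
// function is assumed.
package cleanup

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteCatalogPod(ctx context.Context, cs kubernetes.Interface) error {
	grace := int64(2) // seconds between SIGTERM and SIGKILL, as in the kill entry
	return cs.CoreV1().Pods("openshift-marketplace").Delete(ctx, "redhat-operators-fr7wb",
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
}
```
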
Jan 21 17:05:52 crc kubenswrapper[4902]: E0121 17:05:52.296500 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:06:03 crc kubenswrapper[4902]: I0121 17:06:03.295557 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:06:03 crc kubenswrapper[4902]: E0121 17:06:03.296577 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:06:16 crc kubenswrapper[4902]: I0121 17:06:16.294781 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:06:16 crc kubenswrapper[4902]: E0121 17:06:16.296566 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:06:27 crc kubenswrapper[4902]: I0121 17:06:27.297278 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:06:27 crc kubenswrapper[4902]: E0121 17:06:27.299576 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:06:41 crc kubenswrapper[4902]: I0121 17:06:41.295373 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:06:41 crc kubenswrapper[4902]: E0121 17:06:41.296481 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:06:55 crc kubenswrapper[4902]: I0121 17:06:55.294798 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:06:55 crc kubenswrapper[4902]: E0121 17:06:55.295782 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:07:06 crc kubenswrapper[4902]: I0121 17:07:06.295380 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:07:06 crc kubenswrapper[4902]: E0121 17:07:06.296988 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:07:21 crc kubenswrapper[4902]: I0121 17:07:21.294967 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:07:22 crc kubenswrapper[4902]: I0121 17:07:22.628791 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"df1e3c29d75db17b6274424fd93ca7c5fe90dd1bfa5747bd7c348f540e868b0b"} Jan 21 17:09:47 crc kubenswrapper[4902]: I0121 17:09:47.769896 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:09:47 crc kubenswrapper[4902]: I0121 17:09:47.770581 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.856601 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xrptr"] Jan 21 17:10:13 crc kubenswrapper[4902]: E0121 17:10:13.857831 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="registry-server" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.857844 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="registry-server" Jan 21 17:10:13 crc kubenswrapper[4902]: E0121 17:10:13.857863 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="extract-utilities" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.857869 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="extract-utilities" Jan 21 17:10:13 crc kubenswrapper[4902]: E0121 17:10:13.857875 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="extract-content" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.857881 4902 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="extract-content" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.858112 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="registry-server" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.860116 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.873990 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrptr"] Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.927304 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-catalog-content\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.927362 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-utilities\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.927701 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wpmr\" (UniqueName: \"kubernetes.io/projected/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-kube-api-access-6wpmr\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:14 crc kubenswrapper[4902]: I0121 17:10:14.029579 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wpmr\" (UniqueName: \"kubernetes.io/projected/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-kube-api-access-6wpmr\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:14 crc kubenswrapper[4902]: I0121 17:10:14.030024 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-catalog-content\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:14 crc kubenswrapper[4902]: I0121 17:10:14.030066 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-utilities\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:14 crc kubenswrapper[4902]: I0121 17:10:14.030540 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-catalog-content\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:14 crc kubenswrapper[4902]: I0121 17:10:14.031639 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-utilities\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:14 crc kubenswrapper[4902]: I0121 17:10:14.066915 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wpmr\" (UniqueName: \"kubernetes.io/projected/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-kube-api-access-6wpmr\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:14 crc kubenswrapper[4902]: I0121 17:10:14.247486 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:14 crc kubenswrapper[4902]: I0121 17:10:14.868423 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrptr"] Jan 21 17:10:15 crc kubenswrapper[4902]: I0121 17:10:15.698394 4902 generic.go:334] "Generic (PLEG): container finished" podID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerID="5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c" exitCode=0 Jan 21 17:10:15 crc kubenswrapper[4902]: I0121 17:10:15.698579 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrptr" event={"ID":"46dc05bc-4996-4bc3-8dc1-dd22a85dca93","Type":"ContainerDied","Data":"5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c"} Jan 21 17:10:15 crc kubenswrapper[4902]: I0121 17:10:15.698890 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrptr" event={"ID":"46dc05bc-4996-4bc3-8dc1-dd22a85dca93","Type":"ContainerStarted","Data":"5fd788724e971f3bd232e168dba85b05b4b9912e82e5a1ffb45644da3784ff0f"} Jan 21 17:10:15 crc kubenswrapper[4902]: I0121 17:10:15.700533 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:10:17 crc kubenswrapper[4902]: I0121 17:10:17.720763 4902 generic.go:334] "Generic (PLEG): container finished" podID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerID="2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3" exitCode=0 Jan 21 17:10:17 crc kubenswrapper[4902]: I0121 17:10:17.721164 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrptr" event={"ID":"46dc05bc-4996-4bc3-8dc1-dd22a85dca93","Type":"ContainerDied","Data":"2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3"} Jan 21 17:10:17 crc kubenswrapper[4902]: I0121 17:10:17.769390 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:10:17 crc kubenswrapper[4902]: I0121 17:10:17.769647 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:10:18 crc kubenswrapper[4902]: I0121 17:10:18.731500 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-xrptr" event={"ID":"46dc05bc-4996-4bc3-8dc1-dd22a85dca93","Type":"ContainerStarted","Data":"c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb"} Jan 21 17:10:18 crc kubenswrapper[4902]: I0121 17:10:18.750949 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xrptr" podStartSLOduration=3.330580668 podStartE2EDuration="5.750932857s" podCreationTimestamp="2026-01-21 17:10:13 +0000 UTC" firstStartedPulling="2026-01-21 17:10:15.700350694 +0000 UTC m=+9377.777183723" lastFinishedPulling="2026-01-21 17:10:18.120702883 +0000 UTC m=+9380.197535912" observedRunningTime="2026-01-21 17:10:18.74997631 +0000 UTC m=+9380.826809339" watchObservedRunningTime="2026-01-21 17:10:18.750932857 +0000 UTC m=+9380.827765886" Jan 21 17:10:24 crc kubenswrapper[4902]: I0121 17:10:24.248580 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:24 crc kubenswrapper[4902]: I0121 17:10:24.248987 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:24 crc kubenswrapper[4902]: I0121 17:10:24.315359 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:24 crc kubenswrapper[4902]: I0121 17:10:24.880283 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:24 crc kubenswrapper[4902]: I0121 17:10:24.939073 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrptr"] Jan 21 17:10:26 crc kubenswrapper[4902]: I0121 17:10:26.836662 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xrptr" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerName="registry-server" containerID="cri-o://c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb" gracePeriod=2 Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.608921 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.760970 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wpmr\" (UniqueName: \"kubernetes.io/projected/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-kube-api-access-6wpmr\") pod \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.761063 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-catalog-content\") pod \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.761266 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-utilities\") pod \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.762847 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-utilities" (OuterVolumeSpecName: "utilities") pod "46dc05bc-4996-4bc3-8dc1-dd22a85dca93" (UID: "46dc05bc-4996-4bc3-8dc1-dd22a85dca93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.769251 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-kube-api-access-6wpmr" (OuterVolumeSpecName: "kube-api-access-6wpmr") pod "46dc05bc-4996-4bc3-8dc1-dd22a85dca93" (UID: "46dc05bc-4996-4bc3-8dc1-dd22a85dca93"). InnerVolumeSpecName "kube-api-access-6wpmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.793497 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46dc05bc-4996-4bc3-8dc1-dd22a85dca93" (UID: "46dc05bc-4996-4bc3-8dc1-dd22a85dca93"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.849857 4902 generic.go:334] "Generic (PLEG): container finished" podID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerID="c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb" exitCode=0 Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.849924 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrptr" event={"ID":"46dc05bc-4996-4bc3-8dc1-dd22a85dca93","Type":"ContainerDied","Data":"c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb"} Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.849974 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrptr" event={"ID":"46dc05bc-4996-4bc3-8dc1-dd22a85dca93","Type":"ContainerDied","Data":"5fd788724e971f3bd232e168dba85b05b4b9912e82e5a1ffb45644da3784ff0f"} Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.849996 4902 scope.go:117] "RemoveContainer" containerID="c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.850258 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.864007 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.864276 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wpmr\" (UniqueName: \"kubernetes.io/projected/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-kube-api-access-6wpmr\") on node \"crc\" DevicePath \"\"" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.864420 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.877140 4902 scope.go:117] "RemoveContainer" containerID="2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.902993 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrptr"] Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.910479 4902 scope.go:117] "RemoveContainer" containerID="5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.914725 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrptr"] Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.978414 4902 scope.go:117] "RemoveContainer" containerID="c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb" Jan 21 17:10:27 crc kubenswrapper[4902]: E0121 17:10:27.983175 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb\": container with ID starting with c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb not found: ID does not exist" containerID="c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.983224 4902 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb"} err="failed to get container status \"c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb\": rpc error: code = NotFound desc = could not find container \"c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb\": container with ID starting with c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb not found: ID does not exist" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.983256 4902 scope.go:117] "RemoveContainer" containerID="2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3" Jan 21 17:10:27 crc kubenswrapper[4902]: E0121 17:10:27.984120 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3\": container with ID starting with 2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3 not found: ID does not exist" containerID="2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.984139 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3"} err="failed to get container status \"2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3\": rpc error: code = NotFound desc = could not find container \"2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3\": container with ID starting with 2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3 not found: ID does not exist" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.984153 4902 scope.go:117] "RemoveContainer" containerID="5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c" Jan 21 17:10:27 crc kubenswrapper[4902]: E0121 17:10:27.985508 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c\": container with ID starting with 5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c not found: ID does not exist" containerID="5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.985530 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c"} err="failed to get container status \"5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c\": rpc error: code = NotFound desc = could not find container \"5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c\": container with ID starting with 5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c not found: ID does not exist" Jan 21 17:10:28 crc kubenswrapper[4902]: I0121 17:10:28.308536 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" path="/var/lib/kubelet/pods/46dc05bc-4996-4bc3-8dc1-dd22a85dca93/volumes" Jan 21 17:10:47 crc kubenswrapper[4902]: I0121 17:10:47.769774 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:10:47 crc kubenswrapper[4902]: I0121 17:10:47.771359 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:10:47 crc kubenswrapper[4902]: I0121 17:10:47.771497 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 17:10:47 crc kubenswrapper[4902]: I0121 17:10:47.772416 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df1e3c29d75db17b6274424fd93ca7c5fe90dd1bfa5747bd7c348f540e868b0b"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:10:47 crc kubenswrapper[4902]: I0121 17:10:47.772558 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://df1e3c29d75db17b6274424fd93ca7c5fe90dd1bfa5747bd7c348f540e868b0b" gracePeriod=600 Jan 21 17:10:48 crc kubenswrapper[4902]: I0121 17:10:48.061876 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="df1e3c29d75db17b6274424fd93ca7c5fe90dd1bfa5747bd7c348f540e868b0b" exitCode=0 Jan 21 17:10:48 crc kubenswrapper[4902]: I0121 17:10:48.061927 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"df1e3c29d75db17b6274424fd93ca7c5fe90dd1bfa5747bd7c348f540e868b0b"} Jan 21 17:10:48 crc kubenswrapper[4902]: I0121 17:10:48.062252 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:10:49 crc kubenswrapper[4902]: I0121 17:10:49.072789 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1"} Jan 21 17:13:17 crc kubenswrapper[4902]: I0121 17:13:17.770395 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:13:17 crc kubenswrapper[4902]: I0121 17:13:17.771204 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:13:47 crc kubenswrapper[4902]: I0121 17:13:47.770004 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:13:47 crc kubenswrapper[4902]: I0121 17:13:47.770521 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:14:01 crc kubenswrapper[4902]: I0121 17:14:01.893148 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-594ns"] Jan 21 17:14:01 crc kubenswrapper[4902]: E0121 17:14:01.894424 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerName="registry-server" Jan 21 17:14:01 crc kubenswrapper[4902]: I0121 17:14:01.894448 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerName="registry-server" Jan 21 17:14:01 crc kubenswrapper[4902]: E0121 17:14:01.894480 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerName="extract-utilities" Jan 21 17:14:01 crc kubenswrapper[4902]: I0121 17:14:01.894491 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerName="extract-utilities" Jan 21 17:14:01 crc kubenswrapper[4902]: E0121 17:14:01.894544 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerName="extract-content" Jan 21 17:14:01 crc kubenswrapper[4902]: I0121 17:14:01.894557 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerName="extract-content" Jan 21 17:14:01 crc kubenswrapper[4902]: I0121 17:14:01.894883 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerName="registry-server" Jan 21 17:14:01 crc kubenswrapper[4902]: I0121 17:14:01.898094 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:01 crc kubenswrapper[4902]: I0121 17:14:01.904230 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-594ns"] Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.039366 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-catalog-content\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.039575 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwq25\" (UniqueName: \"kubernetes.io/projected/e00bd4cb-eeca-472b-a935-c33859f82a60-kube-api-access-fwq25\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.039898 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-utilities\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.142395 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-catalog-content\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.142585 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwq25\" (UniqueName: \"kubernetes.io/projected/e00bd4cb-eeca-472b-a935-c33859f82a60-kube-api-access-fwq25\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.142753 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-utilities\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.142896 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-catalog-content\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.143385 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-utilities\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.167956 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fwq25\" (UniqueName: \"kubernetes.io/projected/e00bd4cb-eeca-472b-a935-c33859f82a60-kube-api-access-fwq25\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.227680 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.901934 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-594ns"] Jan 21 17:14:03 crc kubenswrapper[4902]: I0121 17:14:03.550574 4902 generic.go:334] "Generic (PLEG): container finished" podID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerID="ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4" exitCode=0 Jan 21 17:14:03 crc kubenswrapper[4902]: I0121 17:14:03.550811 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-594ns" event={"ID":"e00bd4cb-eeca-472b-a935-c33859f82a60","Type":"ContainerDied","Data":"ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4"} Jan 21 17:14:03 crc kubenswrapper[4902]: I0121 17:14:03.550832 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-594ns" event={"ID":"e00bd4cb-eeca-472b-a935-c33859f82a60","Type":"ContainerStarted","Data":"ce31d363464120b5424072d2a0c77ed3f84a85f51a0782a1b19cce1c51bb85b0"} Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.099323 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c5vrg"] Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.103890 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.130148 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c5vrg"] Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.191970 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdbsl\" (UniqueName: \"kubernetes.io/projected/670e29d4-f2fe-4d3d-be51-61fa2dc71666-kube-api-access-fdbsl\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.192247 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-utilities\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.192363 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-catalog-content\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.294810 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdbsl\" (UniqueName: \"kubernetes.io/projected/670e29d4-f2fe-4d3d-be51-61fa2dc71666-kube-api-access-fdbsl\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.294904 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-utilities\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.294933 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-catalog-content\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.295628 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-utilities\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.296166 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-catalog-content\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.315363 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fdbsl\" (UniqueName: \"kubernetes.io/projected/670e29d4-f2fe-4d3d-be51-61fa2dc71666-kube-api-access-fdbsl\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.431278 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.595851 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-594ns" event={"ID":"e00bd4cb-eeca-472b-a935-c33859f82a60","Type":"ContainerStarted","Data":"7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340"} Jan 21 17:14:05 crc kubenswrapper[4902]: I0121 17:14:05.006173 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c5vrg"] Jan 21 17:14:05 crc kubenswrapper[4902]: W0121 17:14:05.015190 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod670e29d4_f2fe_4d3d_be51_61fa2dc71666.slice/crio-97709cf162310f5fb973c5cad674496bec1c683ab5d0fb7ba1ec778e7b52dcd8 WatchSource:0}: Error finding container 97709cf162310f5fb973c5cad674496bec1c683ab5d0fb7ba1ec778e7b52dcd8: Status 404 returned error can't find the container with id 97709cf162310f5fb973c5cad674496bec1c683ab5d0fb7ba1ec778e7b52dcd8 Jan 21 17:14:05 crc kubenswrapper[4902]: I0121 17:14:05.605871 4902 generic.go:334] "Generic (PLEG): container finished" podID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerID="74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84" exitCode=0 Jan 21 17:14:05 crc kubenswrapper[4902]: I0121 17:14:05.606161 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5vrg" event={"ID":"670e29d4-f2fe-4d3d-be51-61fa2dc71666","Type":"ContainerDied","Data":"74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84"} Jan 21 17:14:05 crc kubenswrapper[4902]: I0121 17:14:05.606188 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5vrg" event={"ID":"670e29d4-f2fe-4d3d-be51-61fa2dc71666","Type":"ContainerStarted","Data":"97709cf162310f5fb973c5cad674496bec1c683ab5d0fb7ba1ec778e7b52dcd8"} Jan 21 17:14:05 crc kubenswrapper[4902]: I0121 17:14:05.611815 4902 generic.go:334] "Generic (PLEG): container finished" podID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerID="7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340" exitCode=0 Jan 21 17:14:05 crc kubenswrapper[4902]: I0121 17:14:05.611847 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-594ns" event={"ID":"e00bd4cb-eeca-472b-a935-c33859f82a60","Type":"ContainerDied","Data":"7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340"} Jan 21 17:14:06 crc kubenswrapper[4902]: I0121 17:14:06.627160 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-594ns" event={"ID":"e00bd4cb-eeca-472b-a935-c33859f82a60","Type":"ContainerStarted","Data":"2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347"} Jan 21 17:14:06 crc kubenswrapper[4902]: I0121 17:14:06.638275 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5vrg" 
event={"ID":"670e29d4-f2fe-4d3d-be51-61fa2dc71666","Type":"ContainerStarted","Data":"c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c"} Jan 21 17:14:06 crc kubenswrapper[4902]: I0121 17:14:06.655673 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-594ns" podStartSLOduration=3.140535176 podStartE2EDuration="5.655651207s" podCreationTimestamp="2026-01-21 17:14:01 +0000 UTC" firstStartedPulling="2026-01-21 17:14:03.552373978 +0000 UTC m=+9605.629207007" lastFinishedPulling="2026-01-21 17:14:06.067490009 +0000 UTC m=+9608.144323038" observedRunningTime="2026-01-21 17:14:06.650602365 +0000 UTC m=+9608.727435394" watchObservedRunningTime="2026-01-21 17:14:06.655651207 +0000 UTC m=+9608.732484246" Jan 21 17:14:07 crc kubenswrapper[4902]: I0121 17:14:07.653449 4902 generic.go:334] "Generic (PLEG): container finished" podID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerID="c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c" exitCode=0 Jan 21 17:14:07 crc kubenswrapper[4902]: I0121 17:14:07.653890 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5vrg" event={"ID":"670e29d4-f2fe-4d3d-be51-61fa2dc71666","Type":"ContainerDied","Data":"c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c"} Jan 21 17:14:08 crc kubenswrapper[4902]: I0121 17:14:08.675092 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5vrg" event={"ID":"670e29d4-f2fe-4d3d-be51-61fa2dc71666","Type":"ContainerStarted","Data":"61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6"} Jan 21 17:14:08 crc kubenswrapper[4902]: I0121 17:14:08.694726 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c5vrg" podStartSLOduration=2.253752409 podStartE2EDuration="4.694706589s" podCreationTimestamp="2026-01-21 17:14:04 +0000 UTC" firstStartedPulling="2026-01-21 17:14:05.61054326 +0000 UTC m=+9607.687376289" lastFinishedPulling="2026-01-21 17:14:08.05149744 +0000 UTC m=+9610.128330469" observedRunningTime="2026-01-21 17:14:08.69259867 +0000 UTC m=+9610.769431699" watchObservedRunningTime="2026-01-21 17:14:08.694706589 +0000 UTC m=+9610.771539628" Jan 21 17:14:12 crc kubenswrapper[4902]: I0121 17:14:12.228176 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:12 crc kubenswrapper[4902]: I0121 17:14:12.228765 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:12 crc kubenswrapper[4902]: I0121 17:14:12.305996 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:12 crc kubenswrapper[4902]: I0121 17:14:12.770715 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:13 crc kubenswrapper[4902]: I0121 17:14:13.679944 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-594ns"] Jan 21 17:14:14 crc kubenswrapper[4902]: I0121 17:14:14.431464 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:14 crc kubenswrapper[4902]: I0121 17:14:14.432599 4902 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:14 crc kubenswrapper[4902]: I0121 17:14:14.498377 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:14 crc kubenswrapper[4902]: I0121 17:14:14.735813 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-594ns" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerName="registry-server" containerID="cri-o://2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347" gracePeriod=2 Jan 21 17:14:14 crc kubenswrapper[4902]: I0121 17:14:14.791616 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.245063 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.372636 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-utilities\") pod \"e00bd4cb-eeca-472b-a935-c33859f82a60\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.372870 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwq25\" (UniqueName: \"kubernetes.io/projected/e00bd4cb-eeca-472b-a935-c33859f82a60-kube-api-access-fwq25\") pod \"e00bd4cb-eeca-472b-a935-c33859f82a60\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.372910 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-catalog-content\") pod \"e00bd4cb-eeca-472b-a935-c33859f82a60\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.373772 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-utilities" (OuterVolumeSpecName: "utilities") pod "e00bd4cb-eeca-472b-a935-c33859f82a60" (UID: "e00bd4cb-eeca-472b-a935-c33859f82a60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.377978 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e00bd4cb-eeca-472b-a935-c33859f82a60-kube-api-access-fwq25" (OuterVolumeSpecName: "kube-api-access-fwq25") pod "e00bd4cb-eeca-472b-a935-c33859f82a60" (UID: "e00bd4cb-eeca-472b-a935-c33859f82a60"). InnerVolumeSpecName "kube-api-access-fwq25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.424784 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e00bd4cb-eeca-472b-a935-c33859f82a60" (UID: "e00bd4cb-eeca-472b-a935-c33859f82a60"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.476206 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwq25\" (UniqueName: \"kubernetes.io/projected/e00bd4cb-eeca-472b-a935-c33859f82a60-kube-api-access-fwq25\") on node \"crc\" DevicePath \"\"" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.476243 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.476294 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.747018 4902 generic.go:334] "Generic (PLEG): container finished" podID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerID="2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347" exitCode=0 Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.747143 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.747138 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-594ns" event={"ID":"e00bd4cb-eeca-472b-a935-c33859f82a60","Type":"ContainerDied","Data":"2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347"} Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.747271 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-594ns" event={"ID":"e00bd4cb-eeca-472b-a935-c33859f82a60","Type":"ContainerDied","Data":"ce31d363464120b5424072d2a0c77ed3f84a85f51a0782a1b19cce1c51bb85b0"} Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.747317 4902 scope.go:117] "RemoveContainer" containerID="2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.767723 4902 scope.go:117] "RemoveContainer" containerID="7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.785072 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-594ns"] Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.797206 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-594ns"] Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.813451 4902 scope.go:117] "RemoveContainer" containerID="ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.867172 4902 scope.go:117] "RemoveContainer" containerID="2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347" Jan 21 17:14:15 crc kubenswrapper[4902]: E0121 17:14:15.867625 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347\": container with ID starting with 2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347 not found: ID does not exist" containerID="2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.867679 
4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347"} err="failed to get container status \"2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347\": rpc error: code = NotFound desc = could not find container \"2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347\": container with ID starting with 2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347 not found: ID does not exist" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.867716 4902 scope.go:117] "RemoveContainer" containerID="7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340" Jan 21 17:14:15 crc kubenswrapper[4902]: E0121 17:14:15.868024 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340\": container with ID starting with 7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340 not found: ID does not exist" containerID="7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.868100 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340"} err="failed to get container status \"7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340\": rpc error: code = NotFound desc = could not find container \"7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340\": container with ID starting with 7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340 not found: ID does not exist" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.868127 4902 scope.go:117] "RemoveContainer" containerID="ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4" Jan 21 17:14:15 crc kubenswrapper[4902]: E0121 17:14:15.868343 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4\": container with ID starting with ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4 not found: ID does not exist" containerID="ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.868370 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4"} err="failed to get container status \"ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4\": rpc error: code = NotFound desc = could not find container \"ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4\": container with ID starting with ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4 not found: ID does not exist" Jan 21 17:14:16 crc kubenswrapper[4902]: I0121 17:14:16.319093 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" path="/var/lib/kubelet/pods/e00bd4cb-eeca-472b-a935-c33859f82a60/volumes" Jan 21 17:14:16 crc kubenswrapper[4902]: I0121 17:14:16.682978 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c5vrg"] Jan 21 17:14:17 crc kubenswrapper[4902]: I0121 17:14:17.778848 4902 patch_prober.go:28] interesting 
Jan 21 17:14:17 crc kubenswrapper[4902]: I0121 17:14:17.779222 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:14:17 crc kubenswrapper[4902]: I0121 17:14:17.779278 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb"
Jan 21 17:14:17 crc kubenswrapper[4902]: I0121 17:14:17.780918 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 17:14:17 crc kubenswrapper[4902]: I0121 17:14:17.781011 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" gracePeriod=600
Jan 21 17:14:17 crc kubenswrapper[4902]: I0121 17:14:17.795539 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c5vrg" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerName="registry-server" containerID="cri-o://61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6" gracePeriod=2
Jan 21 17:14:18 crc kubenswrapper[4902]: E0121 17:14:18.422612 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.774487 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c5vrg"
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.818969 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" exitCode=0
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.819030 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1"}
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.819077 4902 scope.go:117] "RemoveContainer" containerID="df1e3c29d75db17b6274424fd93ca7c5fe90dd1bfa5747bd7c348f540e868b0b"
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.819842 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1"
Jan 21 17:14:18 crc kubenswrapper[4902]: E0121 17:14:18.820228 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.826226 4902 generic.go:334] "Generic (PLEG): container finished" podID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerID="61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6" exitCode=0
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.826254 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5vrg" event={"ID":"670e29d4-f2fe-4d3d-be51-61fa2dc71666","Type":"ContainerDied","Data":"61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6"}
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.826273 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5vrg" event={"ID":"670e29d4-f2fe-4d3d-be51-61fa2dc71666","Type":"ContainerDied","Data":"97709cf162310f5fb973c5cad674496bec1c683ab5d0fb7ba1ec778e7b52dcd8"}
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.826319 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c5vrg"
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.850027 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-catalog-content\") pod \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") "
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.850267 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdbsl\" (UniqueName: \"kubernetes.io/projected/670e29d4-f2fe-4d3d-be51-61fa2dc71666-kube-api-access-fdbsl\") pod \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") "
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.850627 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-utilities\") pod \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") "
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.851722 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-utilities" (OuterVolumeSpecName: "utilities") pod "670e29d4-f2fe-4d3d-be51-61fa2dc71666" (UID: "670e29d4-f2fe-4d3d-be51-61fa2dc71666"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.863591 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/670e29d4-f2fe-4d3d-be51-61fa2dc71666-kube-api-access-fdbsl" (OuterVolumeSpecName: "kube-api-access-fdbsl") pod "670e29d4-f2fe-4d3d-be51-61fa2dc71666" (UID: "670e29d4-f2fe-4d3d-be51-61fa2dc71666"). InnerVolumeSpecName "kube-api-access-fdbsl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.877798 4902 scope.go:117] "RemoveContainer" containerID="61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6"
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.922107 4902 scope.go:117] "RemoveContainer" containerID="c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c"
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.941514 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "670e29d4-f2fe-4d3d-be51-61fa2dc71666" (UID: "670e29d4-f2fe-4d3d-be51-61fa2dc71666"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.944230 4902 scope.go:117] "RemoveContainer" containerID="74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84"
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.953701 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.953725 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.953736 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdbsl\" (UniqueName: \"kubernetes.io/projected/670e29d4-f2fe-4d3d-be51-61fa2dc71666-kube-api-access-fdbsl\") on node \"crc\" DevicePath \"\""
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.984803 4902 scope.go:117] "RemoveContainer" containerID="61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6"
Jan 21 17:14:18 crc kubenswrapper[4902]: E0121 17:14:18.985349 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6\": container with ID starting with 61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6 not found: ID does not exist" containerID="61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6"
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.985411 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6"} err="failed to get container status \"61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6\": rpc error: code = NotFound desc = could not find container \"61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6\": container with ID starting with 61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6 not found: ID does not exist"
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.985443 4902 scope.go:117] "RemoveContainer" containerID="c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c"
Jan 21 17:14:18 crc kubenswrapper[4902]: E0121 17:14:18.985911 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c\": container with ID starting with c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c not found: ID does not exist" containerID="c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c"
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.985941 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c"} err="failed to get container status \"c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c\": rpc error: code = NotFound desc = could not find container \"c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c\": container with ID starting with c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c not found: ID does not exist"
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.985959 4902 scope.go:117] "RemoveContainer" containerID="74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84"
Jan 21 17:14:18 crc kubenswrapper[4902]: E0121 17:14:18.986307 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84\": container with ID starting with 74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84 not found: ID does not exist" containerID="74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84"
Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.986349 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84"} err="failed to get container status \"74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84\": rpc error: code = NotFound desc = could not find container \"74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84\": container with ID starting with 74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84 not found: ID does not exist"
Jan 21 17:14:19 crc kubenswrapper[4902]: I0121 17:14:19.176530 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c5vrg"]
Jan 21 17:14:19 crc kubenswrapper[4902]: I0121 17:14:19.187931 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c5vrg"]
Jan 21 17:14:20 crc kubenswrapper[4902]: I0121 17:14:20.312825 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" path="/var/lib/kubelet/pods/670e29d4-f2fe-4d3d-be51-61fa2dc71666/volumes"
Jan 21 17:14:33 crc kubenswrapper[4902]: I0121 17:14:33.296082 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1"
Jan 21 17:14:33 crc kubenswrapper[4902]: E0121 17:14:33.297559 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 17:14:44 crc kubenswrapper[4902]: I0121 17:14:44.295921 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1"
Jan 21 17:14:44 crc kubenswrapper[4902]: E0121 17:14:44.297023 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 17:14:57 crc kubenswrapper[4902]: I0121 17:14:57.295806 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1"
Jan 21 17:14:57 crc kubenswrapper[4902]: E0121 17:14:57.296516 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.161895 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4"]
Jan 21 17:15:00 crc kubenswrapper[4902]: E0121 17:15:00.164117 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerName="extract-utilities"
Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.164232 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerName="extract-utilities"
Jan 21 17:15:00 crc kubenswrapper[4902]: E0121 17:15:00.164323 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerName="extract-utilities"
Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.164416 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerName="extract-utilities"
Jan 21 17:15:00 crc kubenswrapper[4902]: E0121 17:15:00.164493 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerName="registry-server"
Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.164564 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerName="registry-server"
Jan 21 17:15:00 crc kubenswrapper[4902]: E0121 17:15:00.164650 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerName="registry-server"
Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.164730 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerName="registry-server"
Jan 21 17:15:00 crc kubenswrapper[4902]: E0121 17:15:00.164806 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerName="extract-content"
Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.164883 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerName="extract-content"
Jan 21 17:15:00 crc kubenswrapper[4902]: E0121 17:15:00.164988 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerName="extract-content"
Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.165067 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerName="extract-content"
Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.165332 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerName="registry-server"
Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.165439 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerName="registry-server"
Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.166345 4902 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.173490 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.174496 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.176342 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4"] Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.271319 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c24b2b-2c72-4d64-b26d-00f594c5c656-config-volume\") pod \"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.271930 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gdn5\" (UniqueName: \"kubernetes.io/projected/67c24b2b-2c72-4d64-b26d-00f594c5c656-kube-api-access-2gdn5\") pod \"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.272057 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67c24b2b-2c72-4d64-b26d-00f594c5c656-secret-volume\") pod \"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.374027 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gdn5\" (UniqueName: \"kubernetes.io/projected/67c24b2b-2c72-4d64-b26d-00f594c5c656-kube-api-access-2gdn5\") pod \"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.374731 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67c24b2b-2c72-4d64-b26d-00f594c5c656-secret-volume\") pod \"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.375749 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c24b2b-2c72-4d64-b26d-00f594c5c656-config-volume\") pod \"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.378089 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c24b2b-2c72-4d64-b26d-00f594c5c656-config-volume\") pod 
\"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.612373 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67c24b2b-2c72-4d64-b26d-00f594c5c656-secret-volume\") pod \"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.612485 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gdn5\" (UniqueName: \"kubernetes.io/projected/67c24b2b-2c72-4d64-b26d-00f594c5c656-kube-api-access-2gdn5\") pod \"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.799326 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:01 crc kubenswrapper[4902]: I0121 17:15:01.286543 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4"] Jan 21 17:15:01 crc kubenswrapper[4902]: I0121 17:15:01.296907 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" event={"ID":"67c24b2b-2c72-4d64-b26d-00f594c5c656","Type":"ContainerStarted","Data":"c805a1433d193153366f8c1693f631697a32b49eeb06fb692a163f2de93d7135"} Jan 21 17:15:02 crc kubenswrapper[4902]: I0121 17:15:02.310704 4902 generic.go:334] "Generic (PLEG): container finished" podID="67c24b2b-2c72-4d64-b26d-00f594c5c656" containerID="b311c7ab0782213c3bd1384e2339658ebe74e6b6859aa04130a324203f27a684" exitCode=0 Jan 21 17:15:02 crc kubenswrapper[4902]: I0121 17:15:02.315522 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" event={"ID":"67c24b2b-2c72-4d64-b26d-00f594c5c656","Type":"ContainerDied","Data":"b311c7ab0782213c3bd1384e2339658ebe74e6b6859aa04130a324203f27a684"} Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.717732 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.750657 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gdn5\" (UniqueName: \"kubernetes.io/projected/67c24b2b-2c72-4d64-b26d-00f594c5c656-kube-api-access-2gdn5\") pod \"67c24b2b-2c72-4d64-b26d-00f594c5c656\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.750718 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c24b2b-2c72-4d64-b26d-00f594c5c656-config-volume\") pod \"67c24b2b-2c72-4d64-b26d-00f594c5c656\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.750774 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67c24b2b-2c72-4d64-b26d-00f594c5c656-secret-volume\") pod \"67c24b2b-2c72-4d64-b26d-00f594c5c656\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.752068 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67c24b2b-2c72-4d64-b26d-00f594c5c656-config-volume" (OuterVolumeSpecName: "config-volume") pod "67c24b2b-2c72-4d64-b26d-00f594c5c656" (UID: "67c24b2b-2c72-4d64-b26d-00f594c5c656"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.773716 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c24b2b-2c72-4d64-b26d-00f594c5c656-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "67c24b2b-2c72-4d64-b26d-00f594c5c656" (UID: "67c24b2b-2c72-4d64-b26d-00f594c5c656"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.773881 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c24b2b-2c72-4d64-b26d-00f594c5c656-kube-api-access-2gdn5" (OuterVolumeSpecName: "kube-api-access-2gdn5") pod "67c24b2b-2c72-4d64-b26d-00f594c5c656" (UID: "67c24b2b-2c72-4d64-b26d-00f594c5c656"). InnerVolumeSpecName "kube-api-access-2gdn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.853124 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gdn5\" (UniqueName: \"kubernetes.io/projected/67c24b2b-2c72-4d64-b26d-00f594c5c656-kube-api-access-2gdn5\") on node \"crc\" DevicePath \"\"" Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.853160 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c24b2b-2c72-4d64-b26d-00f594c5c656-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.853171 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67c24b2b-2c72-4d64-b26d-00f594c5c656-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:15:04 crc kubenswrapper[4902]: I0121 17:15:04.345953 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" event={"ID":"67c24b2b-2c72-4d64-b26d-00f594c5c656","Type":"ContainerDied","Data":"c805a1433d193153366f8c1693f631697a32b49eeb06fb692a163f2de93d7135"} Jan 21 17:15:04 crc kubenswrapper[4902]: I0121 17:15:04.346925 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c805a1433d193153366f8c1693f631697a32b49eeb06fb692a163f2de93d7135" Jan 21 17:15:04 crc kubenswrapper[4902]: I0121 17:15:04.347408 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:04 crc kubenswrapper[4902]: I0121 17:15:04.815720 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf"] Jan 21 17:15:04 crc kubenswrapper[4902]: I0121 17:15:04.827250 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf"] Jan 21 17:15:06 crc kubenswrapper[4902]: I0121 17:15:06.320667 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8598a357-73ed-4850-bbd3-ce46d3d9a623" path="/var/lib/kubelet/pods/8598a357-73ed-4850-bbd3-ce46d3d9a623/volumes" Jan 21 17:15:10 crc kubenswrapper[4902]: I0121 17:15:10.295442 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:15:10 crc kubenswrapper[4902]: E0121 17:15:10.296194 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:15:24 crc kubenswrapper[4902]: I0121 17:15:24.298376 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:15:24 crc kubenswrapper[4902]: E0121 17:15:24.299468 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:15:37 crc kubenswrapper[4902]: I0121 17:15:37.351124 4902 scope.go:117] "RemoveContainer" containerID="1150c7694232d9425d7e1595d33c3ffaecb94a439744ef680974e317c8ea6ae2" Jan 21 17:15:39 crc kubenswrapper[4902]: I0121 17:15:39.295676 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:15:39 crc kubenswrapper[4902]: E0121 17:15:39.296191 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.325425 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rvs44"] Jan 21 17:15:42 crc kubenswrapper[4902]: E0121 17:15:42.326584 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c24b2b-2c72-4d64-b26d-00f594c5c656" containerName="collect-profiles" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.326607 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c24b2b-2c72-4d64-b26d-00f594c5c656" containerName="collect-profiles" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.326995 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c24b2b-2c72-4d64-b26d-00f594c5c656" containerName="collect-profiles" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.329891 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.386729 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvs44"] Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.405971 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jthp\" (UniqueName: \"kubernetes.io/projected/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-kube-api-access-6jthp\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.406278 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-utilities\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.406408 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-catalog-content\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.509333 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jthp\" (UniqueName: \"kubernetes.io/projected/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-kube-api-access-6jthp\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.509479 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-utilities\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.509533 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-catalog-content\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.510272 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-utilities\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.510280 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-catalog-content\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.533671 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6jthp\" (UniqueName: \"kubernetes.io/projected/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-kube-api-access-6jthp\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.715996 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:43 crc kubenswrapper[4902]: I0121 17:15:43.227369 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvs44"] Jan 21 17:15:43 crc kubenswrapper[4902]: W0121 17:15:43.625881 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dec7b84_10f0_4e8b_a421_1ecf8e9f4f20.slice/crio-f58bcf3190a13953897768469416feb9d118e469c63386293d2490eae70ed3b6 WatchSource:0}: Error finding container f58bcf3190a13953897768469416feb9d118e469c63386293d2490eae70ed3b6: Status 404 returned error can't find the container with id f58bcf3190a13953897768469416feb9d118e469c63386293d2490eae70ed3b6 Jan 21 17:15:43 crc kubenswrapper[4902]: I0121 17:15:43.854412 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvs44" event={"ID":"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20","Type":"ContainerStarted","Data":"f58bcf3190a13953897768469416feb9d118e469c63386293d2490eae70ed3b6"} Jan 21 17:15:44 crc kubenswrapper[4902]: I0121 17:15:44.865788 4902 generic.go:334] "Generic (PLEG): container finished" podID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerID="0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f" exitCode=0 Jan 21 17:15:44 crc kubenswrapper[4902]: I0121 17:15:44.865827 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvs44" event={"ID":"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20","Type":"ContainerDied","Data":"0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f"} Jan 21 17:15:44 crc kubenswrapper[4902]: I0121 17:15:44.868218 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:15:46 crc kubenswrapper[4902]: I0121 17:15:46.895401 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvs44" event={"ID":"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20","Type":"ContainerStarted","Data":"0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5"} Jan 21 17:15:46 crc kubenswrapper[4902]: E0121 17:15:46.964660 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dec7b84_10f0_4e8b_a421_1ecf8e9f4f20.slice/crio-0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5.scope\": RecentStats: unable to find data in memory cache]" Jan 21 17:15:49 crc kubenswrapper[4902]: I0121 17:15:49.941823 4902 generic.go:334] "Generic (PLEG): container finished" podID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerID="0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5" exitCode=0 Jan 21 17:15:49 crc kubenswrapper[4902]: I0121 17:15:49.942273 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvs44" event={"ID":"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20","Type":"ContainerDied","Data":"0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5"} Jan 21 17:15:50 crc 
kubenswrapper[4902]: I0121 17:15:50.956503 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvs44" event={"ID":"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20","Type":"ContainerStarted","Data":"7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd"} Jan 21 17:15:50 crc kubenswrapper[4902]: I0121 17:15:50.986641 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rvs44" podStartSLOduration=3.418622988 podStartE2EDuration="8.986620337s" podCreationTimestamp="2026-01-21 17:15:42 +0000 UTC" firstStartedPulling="2026-01-21 17:15:44.868009419 +0000 UTC m=+9706.944842448" lastFinishedPulling="2026-01-21 17:15:50.436006758 +0000 UTC m=+9712.512839797" observedRunningTime="2026-01-21 17:15:50.985027582 +0000 UTC m=+9713.061860651" watchObservedRunningTime="2026-01-21 17:15:50.986620337 +0000 UTC m=+9713.063453356" Jan 21 17:15:52 crc kubenswrapper[4902]: I0121 17:15:52.716661 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:52 crc kubenswrapper[4902]: I0121 17:15:52.716981 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:53 crc kubenswrapper[4902]: I0121 17:15:53.294822 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:15:53 crc kubenswrapper[4902]: E0121 17:15:53.295178 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:15:53 crc kubenswrapper[4902]: I0121 17:15:53.785497 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rvs44" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="registry-server" probeResult="failure" output=< Jan 21 17:15:53 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 17:15:53 crc kubenswrapper[4902]: > Jan 21 17:16:02 crc kubenswrapper[4902]: I0121 17:16:02.775655 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:16:02 crc kubenswrapper[4902]: I0121 17:16:02.839109 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rvs44"
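The startup-probe cycle just above (unhealthy at 17:15:52, a connection timeout against ":50051" at 17:15:53, then started/ready at 17:16:02) is the kubelet waiting for the registry-server's gRPC port to accept connections. A minimal sketch of that check, assuming a bare TCP dial as a stand-in for the real gRPC health probe; the function name and address here are illustrative, not the kubelet's actual prober code:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeOnce mimics the shape of the startup probe that logged
// `timeout: failed to connect service ":50051" within 1s` above:
// fail if the port does not accept a connection within the timeout.
// The real probe is a gRPC health check; a plain TCP dial is used
// here only to illustrate the timeout behaviour.
func probeOnce(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("timeout: failed to connect service %q within %s: %w", addr, timeout, err)
	}
	return conn.Close()
}

func main() {
	// Hypothetical address; in the log the registry-server listens on :50051.
	if err := probeOnce("127.0.0.1:50051", 1*time.Second); err != nil {
		fmt.Println("Probe failed:", err) // analogous to prober.go's "Probe failed" entry
	} else {
		fmt.Println("started") // kubelet then flips startup=started, readiness=ready
	}
}
```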
Jan 21 17:16:03 crc kubenswrapper[4902]: I0121 17:16:03.029112 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvs44"] Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.127994 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rvs44" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="registry-server" containerID="cri-o://7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd" gracePeriod=2 Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.616300 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.739173 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jthp\" (UniqueName: \"kubernetes.io/projected/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-kube-api-access-6jthp\") pod \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.739396 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-catalog-content\") pod \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.739492 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-utilities\") pod \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.739894 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-utilities" (OuterVolumeSpecName: "utilities") pod "9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" (UID: "9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.740493 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.745293 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-kube-api-access-6jthp" (OuterVolumeSpecName: "kube-api-access-6jthp") pod "9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" (UID: "9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20"). InnerVolumeSpecName "kube-api-access-6jthp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.842097 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jthp\" (UniqueName: \"kubernetes.io/projected/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-kube-api-access-6jthp\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.871219 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" (UID: "9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.944476 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.141511 4902 generic.go:334] "Generic (PLEG): container finished" podID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerID="7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd" exitCode=0 Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.141582 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.141596 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvs44" event={"ID":"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20","Type":"ContainerDied","Data":"7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd"} Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.141841 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvs44" event={"ID":"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20","Type":"ContainerDied","Data":"f58bcf3190a13953897768469416feb9d118e469c63386293d2490eae70ed3b6"} Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.141890 4902 scope.go:117] "RemoveContainer" containerID="7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.175480 4902 scope.go:117] "RemoveContainer" containerID="0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.210637 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvs44"] Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.211196 4902 scope.go:117] "RemoveContainer" containerID="0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.224651 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rvs44"] Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.254442 4902 scope.go:117] "RemoveContainer" containerID="7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd" Jan 21 17:16:05 crc kubenswrapper[4902]: E0121 17:16:05.254995 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd\": container with ID starting with 7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd not found: ID does not exist" containerID="7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.255025 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd"} err="failed to get container status \"7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd\": rpc error: code = NotFound desc = could not find container \"7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd\": container with ID starting with 7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd not found: ID does not exist" Jan 21 17:16:05 crc 
kubenswrapper[4902]: I0121 17:16:05.255089 4902 scope.go:117] "RemoveContainer" containerID="0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5" Jan 21 17:16:05 crc kubenswrapper[4902]: E0121 17:16:05.255780 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5\": container with ID starting with 0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5 not found: ID does not exist" containerID="0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.255832 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5"} err="failed to get container status \"0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5\": rpc error: code = NotFound desc = could not find container \"0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5\": container with ID starting with 0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5 not found: ID does not exist" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.255866 4902 scope.go:117] "RemoveContainer" containerID="0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f" Jan 21 17:16:05 crc kubenswrapper[4902]: E0121 17:16:05.257866 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f\": container with ID starting with 0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f not found: ID does not exist" containerID="0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.258402 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f"} err="failed to get container status \"0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f\": rpc error: code = NotFound desc = could not find container \"0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f\": container with ID starting with 0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f not found: ID does not exist" Jan 21 17:16:06 crc kubenswrapper[4902]: I0121 17:16:06.318108 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" path="/var/lib/kubelet/pods/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20/volumes" Jan 21 17:16:08 crc kubenswrapper[4902]: I0121 17:16:08.313697 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:16:08 crc kubenswrapper[4902]: E0121 17:16:08.316802 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:16:19 crc kubenswrapper[4902]: I0121 17:16:19.298360 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" 
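The three RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triplets above are benign: by the time the kubelet asks the CRI runtime for the container's status, the container is already gone, so the desired end state (container deleted) already holds and the NotFound is tolerated. A minimal sketch of that idempotent-delete pattern, using a hypothetical in-memory runtime client rather than the kubelet's real CRI types:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the gRPC NotFound the CRI runtime returns once
// a container ID no longer exists (hypothetical; the kubelet checks real
// gRPC status codes).
var errNotFound = errors.New("NotFound: ID does not exist")

// runtimeClient is a hypothetical stand-in for a CRI runtime connection.
type runtimeClient struct{ containers map[string]bool }

func (r *runtimeClient) remove(id string) error {
	if !r.containers[id] {
		return errNotFound
	}
	delete(r.containers, id)
	return nil
}

// removeContainer treats NotFound as success: a container that is already
// gone needs no further deletion, which is why the log records the error
// and then simply moves on to the next RemoveContainer.
func removeContainer(r *runtimeClient, id string) error {
	if err := r.remove(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("DeleteContainer returned error for %s: %v (ignored)\n", id, err)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	r := &runtimeClient{containers: map[string]bool{"7712af74": true}} // truncated ID, illustrative only
	_ = removeContainer(r, "7712af74")                                 // first delete succeeds
	_ = removeContainer(r, "7712af74")                                 // second hits NotFound, tolerated as above
}
```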
Jan 21 17:16:19 crc kubenswrapper[4902]: E0121 17:16:19.299786 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:16:32 crc kubenswrapper[4902]: I0121 17:16:32.296121 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:16:32 crc kubenswrapper[4902]: E0121 17:16:32.297662 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:16:46 crc kubenswrapper[4902]: I0121 17:16:46.295308 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:16:46 crc kubenswrapper[4902]: E0121 17:16:46.296289 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:16:58 crc kubenswrapper[4902]: I0121 17:16:58.301612 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:16:58 crc kubenswrapper[4902]: E0121 17:16:58.302498 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:17:09 crc kubenswrapper[4902]: I0121 17:17:09.294889 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:17:09 crc kubenswrapper[4902]: E0121 17:17:09.295961 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:17:21 crc kubenswrapper[4902]: I0121 17:17:21.296884 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:17:21 crc kubenswrapper[4902]: E0121 17:17:21.297871 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:17:32 crc kubenswrapper[4902]: I0121 17:17:32.303288 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:17:32 crc kubenswrapper[4902]: E0121 17:17:32.304333 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:17:45 crc kubenswrapper[4902]: I0121 17:17:45.295295 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:17:45 crc kubenswrapper[4902]: E0121 17:17:45.296192 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:17:57 crc kubenswrapper[4902]: I0121 17:17:57.295467 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:17:57 crc kubenswrapper[4902]: E0121 17:17:57.296123 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:18:12 crc kubenswrapper[4902]: I0121 17:18:12.295164 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:18:12 crc kubenswrapper[4902]: E0121 17:18:12.295874 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:18:24 crc kubenswrapper[4902]: I0121 17:18:24.295433 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:18:24 crc kubenswrapper[4902]: E0121 17:18:24.296337 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:18:37 crc kubenswrapper[4902]: I0121 17:18:37.306025 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:18:37 crc kubenswrapper[4902]: E0121 17:18:37.306887 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:18:48 crc kubenswrapper[4902]: I0121 17:18:48.313084 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:18:48 crc kubenswrapper[4902]: E0121 17:18:48.314564 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:19:01 crc kubenswrapper[4902]: I0121 17:19:01.294962 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:19:01 crc kubenswrapper[4902]: E0121 17:19:01.295606 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:19:12 crc kubenswrapper[4902]: I0121 17:19:12.295008 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:19:12 crc kubenswrapper[4902]: E0121 17:19:12.295763 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:19:25 crc kubenswrapper[4902]: I0121 17:19:25.296178 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:19:26 crc kubenswrapper[4902]: I0121 17:19:26.548465 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"5e6bee27c568351479893fd8644172bc1970f833c3f9b00f5a27074b919cd4b1"}
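The long run of identical "back-off 5m0s" rejections from 17:14 onward, ending with the successful ContainerStarted at 17:19:26 just above, is the kubelet's crash-loop backoff: each sync attempt is refused until the current backoff window expires, and for a container that has crashed repeatedly that window has saturated at the cap. A sketch of the schedule, assuming the commonly documented kubelet policy of a 10s initial delay that doubles per restart and is capped at 5m (the cap is the figure the log messages quote; the exact policy is an assumption here):

```go
package main

import (
	"fmt"
	"time"
)

// backoffSchedule returns the per-restart delays under a capped exponential
// policy (assumed: 10s initial, doubling each restart, capped at 5m --
// matching the "back-off 5m0s" string in the entries above and below).
func backoffSchedule(restarts int) []time.Duration {
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	delays := make([]time.Duration, 0, restarts)
	d := initial
	for i := 0; i < restarts; i++ {
		delays = append(delays, d)
		if d *= 2; d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	// Prints [10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s]: after six crashes every
	// further restart waits the full five minutes, which is why the
	// machine-config-daemon sync attempts are skipped for so long before the
	// restart at 17:19:25 finally goes through.
	fmt.Println(backoffSchedule(8))
}
```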
Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.180467 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6frl2"] Jan 21 17:21:01 crc kubenswrapper[4902]: E0121 17:21:01.181678 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="registry-server" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.181692 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="registry-server" Jan 21 17:21:01 crc kubenswrapper[4902]: E0121 17:21:01.181704 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="extract-utilities" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.181714 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="extract-utilities" Jan 21 17:21:01 crc kubenswrapper[4902]: E0121 17:21:01.181738 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="extract-content" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.181745 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="extract-content" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.182023 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="registry-server" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.183904 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.200246 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6frl2"] Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.341461 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-utilities\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.341767 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f67wt\" (UniqueName: \"kubernetes.io/projected/ecbac09f-37bd-4a3e-bf8c-5fe146260041-kube-api-access-f67wt\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.341802 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-catalog-content\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.444170 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-utilities\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.444329 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f67wt\" (UniqueName:
\"kubernetes.io/projected/ecbac09f-37bd-4a3e-bf8c-5fe146260041-kube-api-access-f67wt\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.444368 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-catalog-content\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.444812 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-utilities\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.444957 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-catalog-content\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.466192 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f67wt\" (UniqueName: \"kubernetes.io/projected/ecbac09f-37bd-4a3e-bf8c-5fe146260041-kube-api-access-f67wt\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.503768 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:02 crc kubenswrapper[4902]: I0121 17:21:02.027558 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6frl2"] Jan 21 17:21:02 crc kubenswrapper[4902]: I0121 17:21:02.636178 4902 generic.go:334] "Generic (PLEG): container finished" podID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerID="7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7" exitCode=0 Jan 21 17:21:02 crc kubenswrapper[4902]: I0121 17:21:02.636330 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6frl2" event={"ID":"ecbac09f-37bd-4a3e-bf8c-5fe146260041","Type":"ContainerDied","Data":"7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7"} Jan 21 17:21:02 crc kubenswrapper[4902]: I0121 17:21:02.636579 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6frl2" event={"ID":"ecbac09f-37bd-4a3e-bf8c-5fe146260041","Type":"ContainerStarted","Data":"69d9d18b10b5abdd62316b10a1673d000d4b4df0964929f0f8317380d299210f"} Jan 21 17:21:02 crc kubenswrapper[4902]: I0121 17:21:02.639255 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:21:03 crc kubenswrapper[4902]: I0121 17:21:03.649366 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6frl2" event={"ID":"ecbac09f-37bd-4a3e-bf8c-5fe146260041","Type":"ContainerStarted","Data":"fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481"} Jan 21 17:21:04 crc kubenswrapper[4902]: I0121 17:21:04.670133 4902 generic.go:334] "Generic (PLEG): container finished" podID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerID="fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481" exitCode=0 Jan 21 17:21:04 crc kubenswrapper[4902]: I0121 17:21:04.670539 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6frl2" event={"ID":"ecbac09f-37bd-4a3e-bf8c-5fe146260041","Type":"ContainerDied","Data":"fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481"} Jan 21 17:21:05 crc kubenswrapper[4902]: I0121 17:21:05.686442 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6frl2" event={"ID":"ecbac09f-37bd-4a3e-bf8c-5fe146260041","Type":"ContainerStarted","Data":"fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be"} Jan 21 17:21:05 crc kubenswrapper[4902]: I0121 17:21:05.713489 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6frl2" podStartSLOduration=2.142636655 podStartE2EDuration="4.713472887s" podCreationTimestamp="2026-01-21 17:21:01 +0000 UTC" firstStartedPulling="2026-01-21 17:21:02.638914028 +0000 UTC m=+10024.715747057" lastFinishedPulling="2026-01-21 17:21:05.20975023 +0000 UTC m=+10027.286583289" observedRunningTime="2026-01-21 17:21:05.70719768 +0000 UTC m=+10027.784030709" watchObservedRunningTime="2026-01-21 17:21:05.713472887 +0000 UTC m=+10027.790305916" Jan 21 17:21:11 crc kubenswrapper[4902]: I0121 17:21:11.505573 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:11 crc kubenswrapper[4902]: I0121 17:21:11.506179 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 
Jan 21 17:21:11 crc kubenswrapper[4902]: I0121 17:21:11.560779 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6frl2"
Jan 21 17:21:11 crc kubenswrapper[4902]: I0121 17:21:11.807641 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6frl2"
Jan 21 17:21:11 crc kubenswrapper[4902]: I0121 17:21:11.895024 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6frl2"]
Jan 21 17:21:13 crc kubenswrapper[4902]: I0121 17:21:13.766327 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6frl2" podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerName="registry-server" containerID="cri-o://fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be" gracePeriod=2
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.257995 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6frl2"
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.429666 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-utilities\") pod \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") "
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.430384 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-catalog-content\") pod \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") "
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.430635 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f67wt\" (UniqueName: \"kubernetes.io/projected/ecbac09f-37bd-4a3e-bf8c-5fe146260041-kube-api-access-f67wt\") pod \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") "
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.431140 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-utilities" (OuterVolumeSpecName: "utilities") pod "ecbac09f-37bd-4a3e-bf8c-5fe146260041" (UID: "ecbac09f-37bd-4a3e-bf8c-5fe146260041"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.432106 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.435592 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecbac09f-37bd-4a3e-bf8c-5fe146260041-kube-api-access-f67wt" (OuterVolumeSpecName: "kube-api-access-f67wt") pod "ecbac09f-37bd-4a3e-bf8c-5fe146260041" (UID: "ecbac09f-37bd-4a3e-bf8c-5fe146260041"). InnerVolumeSpecName "kube-api-access-f67wt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.470134 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecbac09f-37bd-4a3e-bf8c-5fe146260041" (UID: "ecbac09f-37bd-4a3e-bf8c-5fe146260041"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.539173 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.539205 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f67wt\" (UniqueName: \"kubernetes.io/projected/ecbac09f-37bd-4a3e-bf8c-5fe146260041-kube-api-access-f67wt\") on node \"crc\" DevicePath \"\""
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.785928 4902 generic.go:334] "Generic (PLEG): container finished" podID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerID="fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be" exitCode=0
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.786009 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6frl2" event={"ID":"ecbac09f-37bd-4a3e-bf8c-5fe146260041","Type":"ContainerDied","Data":"fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be"}
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.786148 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6frl2" event={"ID":"ecbac09f-37bd-4a3e-bf8c-5fe146260041","Type":"ContainerDied","Data":"69d9d18b10b5abdd62316b10a1673d000d4b4df0964929f0f8317380d299210f"}
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.786192 4902 scope.go:117] "RemoveContainer" containerID="fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be"
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.786426 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6frl2"
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.847708 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6frl2"]
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.851508 4902 scope.go:117] "RemoveContainer" containerID="fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481"
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.862735 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6frl2"]
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.891973 4902 scope.go:117] "RemoveContainer" containerID="7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7"
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.928213 4902 scope.go:117] "RemoveContainer" containerID="fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be"
Jan 21 17:21:14 crc kubenswrapper[4902]: E0121 17:21:14.929849 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be\": container with ID starting with fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be not found: ID does not exist" containerID="fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be"
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.929912 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be"} err="failed to get container status \"fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be\": rpc error: code = NotFound desc = could not find container \"fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be\": container with ID starting with fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be not found: ID does not exist"
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.929935 4902 scope.go:117] "RemoveContainer" containerID="fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481"
Jan 21 17:21:14 crc kubenswrapper[4902]: E0121 17:21:14.934432 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481\": container with ID starting with fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481 not found: ID does not exist" containerID="fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481"
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.934466 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481"} err="failed to get container status \"fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481\": rpc error: code = NotFound desc = could not find container \"fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481\": container with ID starting with fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481 not found: ID does not exist"
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.934482 4902 scope.go:117] "RemoveContainer" containerID="7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7"
Jan 21 17:21:14 crc kubenswrapper[4902]: E0121 17:21:14.935001 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7\": container with ID starting with 7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7 not found: ID does not exist" containerID="7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7"
Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.935033 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7"} err="failed to get container status \"7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7\": rpc error: code = NotFound desc = could not find container \"7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7\": container with ID starting with 7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7 not found: ID does not exist"
Jan 21 17:21:16 crc kubenswrapper[4902]: I0121 17:21:16.320016 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" path="/var/lib/kubelet/pods/ecbac09f-37bd-4a3e-bf8c-5fe146260041/volumes"
Jan 21 17:21:47 crc kubenswrapper[4902]: I0121 17:21:47.769830 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 17:21:47 crc kubenswrapper[4902]: I0121 17:21:47.770524 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:22:17 crc kubenswrapper[4902]: I0121 17:22:17.769513 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 17:22:17 crc kubenswrapper[4902]: I0121 17:22:17.770145 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:22:47 crc kubenswrapper[4902]: I0121 17:22:47.776633 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 17:22:47 crc kubenswrapper[4902]: I0121 17:22:47.777290 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:22:47 crc kubenswrapper[4902]: I0121 17:22:47.777350 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb"
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 17:22:47 crc kubenswrapper[4902]: I0121 17:22:47.778227 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e6bee27c568351479893fd8644172bc1970f833c3f9b00f5a27074b919cd4b1"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:22:47 crc kubenswrapper[4902]: I0121 17:22:47.778306 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://5e6bee27c568351479893fd8644172bc1970f833c3f9b00f5a27074b919cd4b1" gracePeriod=600 Jan 21 17:22:48 crc kubenswrapper[4902]: I0121 17:22:48.875938 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="5e6bee27c568351479893fd8644172bc1970f833c3f9b00f5a27074b919cd4b1" exitCode=0 Jan 21 17:22:48 crc kubenswrapper[4902]: I0121 17:22:48.876025 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"5e6bee27c568351479893fd8644172bc1970f833c3f9b00f5a27074b919cd4b1"} Jan 21 17:22:48 crc kubenswrapper[4902]: I0121 17:22:48.877448 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e"} Jan 21 17:22:48 crc kubenswrapper[4902]: I0121 17:22:48.877551 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:24:40 crc kubenswrapper[4902]: I0121 17:24:40.965917 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-st2m5"] Jan 21 17:24:40 crc kubenswrapper[4902]: E0121 17:24:40.983208 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerName="registry-server" Jan 21 17:24:40 crc kubenswrapper[4902]: I0121 17:24:40.983467 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerName="registry-server" Jan 21 17:24:40 crc kubenswrapper[4902]: E0121 17:24:40.983575 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerName="extract-utilities" Jan 21 17:24:40 crc kubenswrapper[4902]: I0121 17:24:40.983662 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerName="extract-utilities" Jan 21 17:24:40 crc kubenswrapper[4902]: E0121 17:24:40.983775 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerName="extract-content" Jan 21 17:24:40 crc kubenswrapper[4902]: I0121 17:24:40.983856 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerName="extract-content" Jan 21 17:24:40 crc kubenswrapper[4902]: I0121 17:24:40.984410 4902 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerName="registry-server" Jan 21 17:24:40 crc kubenswrapper[4902]: I0121 17:24:40.988481 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-st2m5"] Jan 21 17:24:40 crc kubenswrapper[4902]: I0121 17:24:40.988781 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.024577 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-utilities\") pod \"community-operators-st2m5\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.024670 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9blfh\" (UniqueName: \"kubernetes.io/projected/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-kube-api-access-9blfh\") pod \"community-operators-st2m5\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.024818 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-catalog-content\") pod \"community-operators-st2m5\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.127743 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-catalog-content\") pod \"community-operators-st2m5\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.128098 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-utilities\") pod \"community-operators-st2m5\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.128290 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-catalog-content\") pod \"community-operators-st2m5\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.128727 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-utilities\") pod \"community-operators-st2m5\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.129332 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9blfh\" (UniqueName: \"kubernetes.io/projected/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-kube-api-access-9blfh\") pod \"community-operators-st2m5\" (UID: 
\"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.150188 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9blfh\" (UniqueName: \"kubernetes.io/projected/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-kube-api-access-9blfh\") pod \"community-operators-st2m5\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.332600 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.890140 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-st2m5"] Jan 21 17:24:42 crc kubenswrapper[4902]: I0121 17:24:42.149609 4902 generic.go:334] "Generic (PLEG): container finished" podID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerID="2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea" exitCode=0 Jan 21 17:24:42 crc kubenswrapper[4902]: I0121 17:24:42.149657 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-st2m5" event={"ID":"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2","Type":"ContainerDied","Data":"2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea"} Jan 21 17:24:42 crc kubenswrapper[4902]: I0121 17:24:42.149683 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-st2m5" event={"ID":"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2","Type":"ContainerStarted","Data":"9bf11b95b15931c87e146b54d872973cb5b5c988f66b6c170a9e9e5ee1b3604f"} Jan 21 17:24:43 crc kubenswrapper[4902]: I0121 17:24:43.160746 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-st2m5" event={"ID":"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2","Type":"ContainerStarted","Data":"b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9"} Jan 21 17:24:44 crc kubenswrapper[4902]: I0121 17:24:44.188964 4902 generic.go:334] "Generic (PLEG): container finished" podID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerID="b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9" exitCode=0 Jan 21 17:24:44 crc kubenswrapper[4902]: I0121 17:24:44.189094 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-st2m5" event={"ID":"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2","Type":"ContainerDied","Data":"b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9"} Jan 21 17:24:45 crc kubenswrapper[4902]: I0121 17:24:45.204121 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-st2m5" event={"ID":"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2","Type":"ContainerStarted","Data":"19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa"} Jan 21 17:24:45 crc kubenswrapper[4902]: I0121 17:24:45.233877 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-st2m5" podStartSLOduration=2.760754568 podStartE2EDuration="5.233854219s" podCreationTimestamp="2026-01-21 17:24:40 +0000 UTC" firstStartedPulling="2026-01-21 17:24:42.153367527 +0000 UTC m=+10244.230200546" lastFinishedPulling="2026-01-21 17:24:44.626467128 +0000 UTC m=+10246.703300197" observedRunningTime="2026-01-21 17:24:45.226958425 +0000 UTC m=+10247.303791494" 
watchObservedRunningTime="2026-01-21 17:24:45.233854219 +0000 UTC m=+10247.310687248" Jan 21 17:24:51 crc kubenswrapper[4902]: I0121 17:24:51.333369 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:51 crc kubenswrapper[4902]: I0121 17:24:51.333842 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:51 crc kubenswrapper[4902]: I0121 17:24:51.407745 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:52 crc kubenswrapper[4902]: I0121 17:24:52.374168 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:52 crc kubenswrapper[4902]: I0121 17:24:52.460767 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-st2m5"] Jan 21 17:24:54 crc kubenswrapper[4902]: I0121 17:24:54.307151 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-st2m5" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerName="registry-server" containerID="cri-o://19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa" gracePeriod=2 Jan 21 17:24:54 crc kubenswrapper[4902]: I0121 17:24:54.782269 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:54 crc kubenswrapper[4902]: I0121 17:24:54.921998 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-catalog-content\") pod \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " Jan 21 17:24:54 crc kubenswrapper[4902]: I0121 17:24:54.922344 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-utilities\") pod \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " Jan 21 17:24:54 crc kubenswrapper[4902]: I0121 17:24:54.922440 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9blfh\" (UniqueName: \"kubernetes.io/projected/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-kube-api-access-9blfh\") pod \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " Jan 21 17:24:54 crc kubenswrapper[4902]: I0121 17:24:54.924176 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-utilities" (OuterVolumeSpecName: "utilities") pod "0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" (UID: "0846ffa8-a7a2-47b2-b8dc-aa69b321fef2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:24:54 crc kubenswrapper[4902]: I0121 17:24:54.931449 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-kube-api-access-9blfh" (OuterVolumeSpecName: "kube-api-access-9blfh") pod "0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" (UID: "0846ffa8-a7a2-47b2-b8dc-aa69b321fef2"). InnerVolumeSpecName "kube-api-access-9blfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:24:54 crc kubenswrapper[4902]: I0121 17:24:54.989654 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" (UID: "0846ffa8-a7a2-47b2-b8dc-aa69b321fef2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.024893 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.024922 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.024938 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9blfh\" (UniqueName: \"kubernetes.io/projected/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-kube-api-access-9blfh\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.323230 4902 generic.go:334] "Generic (PLEG): container finished" podID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerID="19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa" exitCode=0 Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.324330 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-st2m5" event={"ID":"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2","Type":"ContainerDied","Data":"19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa"} Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.324480 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-st2m5" event={"ID":"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2","Type":"ContainerDied","Data":"9bf11b95b15931c87e146b54d872973cb5b5c988f66b6c170a9e9e5ee1b3604f"} Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.324609 4902 scope.go:117] "RemoveContainer" containerID="19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.324907 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.356286 4902 scope.go:117] "RemoveContainer" containerID="b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.391475 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-st2m5"] Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.402280 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-st2m5"] Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.413758 4902 scope.go:117] "RemoveContainer" containerID="2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.453725 4902 scope.go:117] "RemoveContainer" containerID="19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa" Jan 21 17:24:55 crc kubenswrapper[4902]: E0121 17:24:55.454181 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa\": container with ID starting with 19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa not found: ID does not exist" containerID="19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.454250 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa"} err="failed to get container status \"19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa\": rpc error: code = NotFound desc = could not find container \"19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa\": container with ID starting with 19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa not found: ID does not exist" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.454283 4902 scope.go:117] "RemoveContainer" containerID="b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9" Jan 21 17:24:55 crc kubenswrapper[4902]: E0121 17:24:55.454606 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9\": container with ID starting with b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9 not found: ID does not exist" containerID="b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.454642 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9"} err="failed to get container status \"b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9\": rpc error: code = NotFound desc = could not find container \"b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9\": container with ID starting with b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9 not found: ID does not exist" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.454686 4902 scope.go:117] "RemoveContainer" containerID="2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea" Jan 21 17:24:55 crc kubenswrapper[4902]: E0121 17:24:55.454933 4902 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea\": container with ID starting with 2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea not found: ID does not exist" containerID="2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.454963 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea"} err="failed to get container status \"2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea\": rpc error: code = NotFound desc = could not find container \"2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea\": container with ID starting with 2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea not found: ID does not exist" Jan 21 17:24:56 crc kubenswrapper[4902]: I0121 17:24:56.312315 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" path="/var/lib/kubelet/pods/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2/volumes" Jan 21 17:25:17 crc kubenswrapper[4902]: I0121 17:25:17.770154 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:25:17 crc kubenswrapper[4902]: I0121 17:25:17.770945 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.774658 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g8sp7"] Jan 21 17:25:19 crc kubenswrapper[4902]: E0121 17:25:19.775561 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerName="extract-content" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.775576 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerName="extract-content" Jan 21 17:25:19 crc kubenswrapper[4902]: E0121 17:25:19.775600 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerName="extract-utilities" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.775608 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerName="extract-utilities" Jan 21 17:25:19 crc kubenswrapper[4902]: E0121 17:25:19.775628 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerName="registry-server" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.775636 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerName="registry-server" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.775865 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerName="registry-server" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 
17:25:19.777769 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.789196 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4knpz\" (UniqueName: \"kubernetes.io/projected/5ee1fb18-dd95-405a-b744-92b02ac80b20-kube-api-access-4knpz\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.789365 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-catalog-content\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.789523 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-utilities\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.795238 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g8sp7"] Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.891234 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4knpz\" (UniqueName: \"kubernetes.io/projected/5ee1fb18-dd95-405a-b744-92b02ac80b20-kube-api-access-4knpz\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.891293 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-catalog-content\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.891334 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-utilities\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.891792 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-utilities\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.892730 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-catalog-content\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc 
kubenswrapper[4902]: I0121 17:25:19.914121 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4knpz\" (UniqueName: \"kubernetes.io/projected/5ee1fb18-dd95-405a-b744-92b02ac80b20-kube-api-access-4knpz\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:20 crc kubenswrapper[4902]: I0121 17:25:20.112504 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:20 crc kubenswrapper[4902]: I0121 17:25:20.666825 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g8sp7"] Jan 21 17:25:21 crc kubenswrapper[4902]: I0121 17:25:21.652924 4902 generic.go:334] "Generic (PLEG): container finished" podID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerID="66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966" exitCode=0 Jan 21 17:25:21 crc kubenswrapper[4902]: I0121 17:25:21.653378 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8sp7" event={"ID":"5ee1fb18-dd95-405a-b744-92b02ac80b20","Type":"ContainerDied","Data":"66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966"} Jan 21 17:25:21 crc kubenswrapper[4902]: I0121 17:25:21.656546 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8sp7" event={"ID":"5ee1fb18-dd95-405a-b744-92b02ac80b20","Type":"ContainerStarted","Data":"767d280e7edd1011a36ba60c402bed1755ea287ad9f9239295ded8a1b49d6336"} Jan 21 17:25:22 crc kubenswrapper[4902]: I0121 17:25:22.673779 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8sp7" event={"ID":"5ee1fb18-dd95-405a-b744-92b02ac80b20","Type":"ContainerStarted","Data":"c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2"} Jan 21 17:25:22 crc kubenswrapper[4902]: E0121 17:25:22.987331 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ee1fb18_dd95_405a_b744_92b02ac80b20.slice/crio-c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ee1fb18_dd95_405a_b744_92b02ac80b20.slice/crio-conmon-c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2.scope\": RecentStats: unable to find data in memory cache]" Jan 21 17:25:23 crc kubenswrapper[4902]: I0121 17:25:23.692615 4902 generic.go:334] "Generic (PLEG): container finished" podID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerID="c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2" exitCode=0 Jan 21 17:25:23 crc kubenswrapper[4902]: I0121 17:25:23.692682 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8sp7" event={"ID":"5ee1fb18-dd95-405a-b744-92b02ac80b20","Type":"ContainerDied","Data":"c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2"} Jan 21 17:25:25 crc kubenswrapper[4902]: I0121 17:25:25.716699 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8sp7" event={"ID":"5ee1fb18-dd95-405a-b744-92b02ac80b20","Type":"ContainerStarted","Data":"de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae"} Jan 21 17:25:25 crc 
Jan 21 17:25:25 crc kubenswrapper[4902]: I0121 17:25:25.754146 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g8sp7" podStartSLOduration=4.283609005 podStartE2EDuration="6.754122604s" podCreationTimestamp="2026-01-21 17:25:19 +0000 UTC" firstStartedPulling="2026-01-21 17:25:21.655333597 +0000 UTC m=+10283.732166636" lastFinishedPulling="2026-01-21 17:25:24.125847206 +0000 UTC m=+10286.202680235" observedRunningTime="2026-01-21 17:25:25.740167241 +0000 UTC m=+10287.817000290" watchObservedRunningTime="2026-01-21 17:25:25.754122604 +0000 UTC m=+10287.830955653"
Jan 21 17:25:30 crc kubenswrapper[4902]: I0121 17:25:30.113625 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g8sp7"
Jan 21 17:25:30 crc kubenswrapper[4902]: I0121 17:25:30.114270 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g8sp7"
Jan 21 17:25:30 crc kubenswrapper[4902]: I0121 17:25:30.208479 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g8sp7"
Jan 21 17:25:30 crc kubenswrapper[4902]: I0121 17:25:30.876182 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g8sp7"
Jan 21 17:25:30 crc kubenswrapper[4902]: I0121 17:25:30.955547 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g8sp7"]
Jan 21 17:25:32 crc kubenswrapper[4902]: I0121 17:25:32.816909 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g8sp7" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerName="registry-server" containerID="cri-o://de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae" gracePeriod=2
Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.815680 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g8sp7"
Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.826783 4902 generic.go:334] "Generic (PLEG): container finished" podID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerID="de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae" exitCode=0
Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.826839 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g8sp7"
Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.826849 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8sp7" event={"ID":"5ee1fb18-dd95-405a-b744-92b02ac80b20","Type":"ContainerDied","Data":"de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae"}
Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.826884 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8sp7" event={"ID":"5ee1fb18-dd95-405a-b744-92b02ac80b20","Type":"ContainerDied","Data":"767d280e7edd1011a36ba60c402bed1755ea287ad9f9239295ded8a1b49d6336"}
Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.826903 4902 scope.go:117] "RemoveContainer" containerID="de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae"
Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.883521 4902 scope.go:117] "RemoveContainer" containerID="c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2"
Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.914189 4902 scope.go:117] "RemoveContainer" containerID="66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966"
Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.957140 4902 scope.go:117] "RemoveContainer" containerID="de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae"
Jan 21 17:25:33 crc kubenswrapper[4902]: E0121 17:25:33.957657 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae\": container with ID starting with de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae not found: ID does not exist" containerID="de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae"
Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.957692 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae"} err="failed to get container status \"de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae\": rpc error: code = NotFound desc = could not find container \"de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae\": container with ID starting with de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae not found: ID does not exist"
Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.957713 4902 scope.go:117] "RemoveContainer" containerID="c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2"
Jan 21 17:25:33 crc kubenswrapper[4902]: E0121 17:25:33.958645 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2\": container with ID starting with c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2 not found: ID does not exist" containerID="c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2"
Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.958668 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2"} err="failed to get container status \"c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2\": rpc error: code = NotFound desc = could not find container \"c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2\": container with ID starting with c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2 not found: ID does not exist"
Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.958684 4902 scope.go:117] "RemoveContainer" containerID="66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966"
Jan 21 17:25:33 crc kubenswrapper[4902]: E0121 17:25:33.959270 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966\": container with ID starting with 66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966 not found: ID does not exist" containerID="66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966"
Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.959293 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966"} err="failed to get container status \"66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966\": rpc error: code = NotFound desc = could not find container \"66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966\": container with ID starting with 66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966 not found: ID does not exist"
Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.054895 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4knpz\" (UniqueName: \"kubernetes.io/projected/5ee1fb18-dd95-405a-b744-92b02ac80b20-kube-api-access-4knpz\") pod \"5ee1fb18-dd95-405a-b744-92b02ac80b20\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") "
Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.055034 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-utilities\") pod \"5ee1fb18-dd95-405a-b744-92b02ac80b20\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") "
Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.055145 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-catalog-content\") pod \"5ee1fb18-dd95-405a-b744-92b02ac80b20\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") "
Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.055873 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-utilities" (OuterVolumeSpecName: "utilities") pod "5ee1fb18-dd95-405a-b744-92b02ac80b20" (UID: "5ee1fb18-dd95-405a-b744-92b02ac80b20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.064322 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee1fb18-dd95-405a-b744-92b02ac80b20-kube-api-access-4knpz" (OuterVolumeSpecName: "kube-api-access-4knpz") pod "5ee1fb18-dd95-405a-b744-92b02ac80b20" (UID: "5ee1fb18-dd95-405a-b744-92b02ac80b20"). InnerVolumeSpecName "kube-api-access-4knpz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.103117 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ee1fb18-dd95-405a-b744-92b02ac80b20" (UID: "5ee1fb18-dd95-405a-b744-92b02ac80b20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.159588 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4knpz\" (UniqueName: \"kubernetes.io/projected/5ee1fb18-dd95-405a-b744-92b02ac80b20-kube-api-access-4knpz\") on node \"crc\" DevicePath \"\""
Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.159852 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.159944 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.170336 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g8sp7"]
Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.185911 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g8sp7"]
Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.321303 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" path="/var/lib/kubelet/pods/5ee1fb18-dd95-405a-b744-92b02ac80b20/volumes"
Jan 21 17:25:47 crc kubenswrapper[4902]: I0121 17:25:47.770331 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 17:25:47 crc kubenswrapper[4902]: I0121 17:25:47.771271 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:26:17 crc kubenswrapper[4902]: I0121 17:26:17.772133 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 17:26:17 crc kubenswrapper[4902]: I0121 17:26:17.773058 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:26:17 crc kubenswrapper[4902]: I0121 17:26:17.773122 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb"
Jan 21 17:26:17 crc kubenswrapper[4902]: I0121 17:26:17.775122 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 17:26:17 crc kubenswrapper[4902]: I0121 17:26:17.775254 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" gracePeriod=600
Jan 21 17:26:17 crc kubenswrapper[4902]: E0121 17:26:17.942412 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 17:26:18 crc kubenswrapper[4902]: I0121 17:26:18.302917 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" exitCode=0
Jan 21 17:26:18 crc kubenswrapper[4902]: I0121 17:26:18.311533 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e"}
Jan 21 17:26:18 crc kubenswrapper[4902]: I0121 17:26:18.311816 4902 scope.go:117] "RemoveContainer" containerID="5e6bee27c568351479893fd8644172bc1970f833c3f9b00f5a27074b919cd4b1"
Jan 21 17:26:18 crc kubenswrapper[4902]: I0121 17:26:18.312865 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e"
Jan 21 17:26:18 crc kubenswrapper[4902]: E0121 17:26:18.313233 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a"
Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.777608 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tfztn"]
Jan 21 17:26:22 crc kubenswrapper[4902]: E0121 17:26:22.779325 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerName="registry-server"
Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.779339 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerName="registry-server"
Jan 21 17:26:22 crc kubenswrapper[4902]: E0121 17:26:22.779360 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerName="extract-utilities"
Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.779366 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerName="extract-utilities"
Jan 21 17:26:22 crc kubenswrapper[4902]: E0121 17:26:22.779374 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerName="extract-content"
Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.779380 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerName="extract-content"
Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.789653 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerName="registry-server"
Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.791381 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tfztn"
Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.809398 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-485pj\" (UniqueName: \"kubernetes.io/projected/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-kube-api-access-485pj\") pod \"redhat-operators-tfztn\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn"
Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.809536 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-catalog-content\") pod \"redhat-operators-tfztn\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn"
Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.809570 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-utilities\") pod \"redhat-operators-tfztn\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn"
Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.818163 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tfztn"]
Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.911310 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-catalog-content\") pod \"redhat-operators-tfztn\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn"
Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.911373 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-utilities\") pod \"redhat-operators-tfztn\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn"
Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.911476 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-485pj\" (UniqueName: \"kubernetes.io/projected/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-kube-api-access-485pj\") pod \"redhat-operators-tfztn\" (UID: 
\"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.911886 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-catalog-content\") pod \"redhat-operators-tfztn\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.912268 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-utilities\") pod \"redhat-operators-tfztn\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.935869 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-485pj\" (UniqueName: \"kubernetes.io/projected/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-kube-api-access-485pj\") pod \"redhat-operators-tfztn\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:23 crc kubenswrapper[4902]: I0121 17:26:23.121111 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:23 crc kubenswrapper[4902]: I0121 17:26:23.598659 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tfztn"] Jan 21 17:26:24 crc kubenswrapper[4902]: I0121 17:26:24.384032 4902 generic.go:334] "Generic (PLEG): container finished" podID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerID="232ab40424e3be90e3da29f1cecb02852c58216654f49b221218b2c376a531d9" exitCode=0 Jan 21 17:26:24 crc kubenswrapper[4902]: I0121 17:26:24.384378 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfztn" event={"ID":"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a","Type":"ContainerDied","Data":"232ab40424e3be90e3da29f1cecb02852c58216654f49b221218b2c376a531d9"} Jan 21 17:26:24 crc kubenswrapper[4902]: I0121 17:26:24.384408 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfztn" event={"ID":"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a","Type":"ContainerStarted","Data":"47fb3fc789982b0726a9eaca33d56d48b53a5ffbb3e37b220abd7de140c09302"} Jan 21 17:26:24 crc kubenswrapper[4902]: I0121 17:26:24.388881 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:26:26 crc kubenswrapper[4902]: I0121 17:26:26.406571 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfztn" event={"ID":"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a","Type":"ContainerStarted","Data":"75b14825d04b1497405658d6882c82cf75e25621b33fb88924b442ab680cdf83"} Jan 21 17:26:27 crc kubenswrapper[4902]: I0121 17:26:27.417426 4902 generic.go:334] "Generic (PLEG): container finished" podID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerID="75b14825d04b1497405658d6882c82cf75e25621b33fb88924b442ab680cdf83" exitCode=0 Jan 21 17:26:27 crc kubenswrapper[4902]: I0121 17:26:27.417589 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfztn" 
event={"ID":"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a","Type":"ContainerDied","Data":"75b14825d04b1497405658d6882c82cf75e25621b33fb88924b442ab680cdf83"} Jan 21 17:26:30 crc kubenswrapper[4902]: I0121 17:26:30.296011 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:26:30 crc kubenswrapper[4902]: E0121 17:26:30.296914 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:26:31 crc kubenswrapper[4902]: I0121 17:26:31.463269 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfztn" event={"ID":"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a","Type":"ContainerStarted","Data":"fb7e180575d42000763d72a5e12d5b9cefd04449586a7b77c1636ec477704dfa"} Jan 21 17:26:33 crc kubenswrapper[4902]: I0121 17:26:33.121468 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:33 crc kubenswrapper[4902]: I0121 17:26:33.121862 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:34 crc kubenswrapper[4902]: I0121 17:26:34.182886 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tfztn" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="registry-server" probeResult="failure" output=< Jan 21 17:26:34 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 17:26:34 crc kubenswrapper[4902]: > Jan 21 17:26:43 crc kubenswrapper[4902]: I0121 17:26:43.208862 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:43 crc kubenswrapper[4902]: I0121 17:26:43.243821 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tfztn" podStartSLOduration=17.210243413 podStartE2EDuration="21.243797431s" podCreationTimestamp="2026-01-21 17:26:22 +0000 UTC" firstStartedPulling="2026-01-21 17:26:24.387231262 +0000 UTC m=+10346.464064291" lastFinishedPulling="2026-01-21 17:26:28.42078529 +0000 UTC m=+10350.497618309" observedRunningTime="2026-01-21 17:26:31.488456131 +0000 UTC m=+10353.565289160" watchObservedRunningTime="2026-01-21 17:26:43.243797431 +0000 UTC m=+10365.320630500" Jan 21 17:26:43 crc kubenswrapper[4902]: I0121 17:26:43.284253 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:43 crc kubenswrapper[4902]: I0121 17:26:43.458742 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tfztn"] Jan 21 17:26:44 crc kubenswrapper[4902]: I0121 17:26:44.690311 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tfztn" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="registry-server" containerID="cri-o://fb7e180575d42000763d72a5e12d5b9cefd04449586a7b77c1636ec477704dfa" gracePeriod=2 Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.297708 4902 
scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:26:45 crc kubenswrapper[4902]: E0121 17:26:45.298537 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.700577 4902 generic.go:334] "Generic (PLEG): container finished" podID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerID="fb7e180575d42000763d72a5e12d5b9cefd04449586a7b77c1636ec477704dfa" exitCode=0 Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.700647 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfztn" event={"ID":"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a","Type":"ContainerDied","Data":"fb7e180575d42000763d72a5e12d5b9cefd04449586a7b77c1636ec477704dfa"} Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.700681 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfztn" event={"ID":"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a","Type":"ContainerDied","Data":"47fb3fc789982b0726a9eaca33d56d48b53a5ffbb3e37b220abd7de140c09302"} Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.700695 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47fb3fc789982b0726a9eaca33d56d48b53a5ffbb3e37b220abd7de140c09302" Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.753581 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.778937 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-485pj\" (UniqueName: \"kubernetes.io/projected/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-kube-api-access-485pj\") pod \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.779270 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-catalog-content\") pod \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.779325 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-utilities\") pod \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.780382 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-utilities" (OuterVolumeSpecName: "utilities") pod "8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" (UID: "8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.788005 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-kube-api-access-485pj" (OuterVolumeSpecName: "kube-api-access-485pj") pod "8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" (UID: "8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a"). InnerVolumeSpecName "kube-api-access-485pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.881221 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-485pj\" (UniqueName: \"kubernetes.io/projected/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-kube-api-access-485pj\") on node \"crc\" DevicePath \"\"" Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.881254 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.907142 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" (UID: "8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.983586 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:26:46 crc kubenswrapper[4902]: I0121 17:26:46.716559 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:46 crc kubenswrapper[4902]: I0121 17:26:46.746716 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tfztn"] Jan 21 17:26:46 crc kubenswrapper[4902]: I0121 17:26:46.758307 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tfztn"] Jan 21 17:26:48 crc kubenswrapper[4902]: I0121 17:26:48.354967 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" path="/var/lib/kubelet/pods/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a/volumes" Jan 21 17:26:59 crc kubenswrapper[4902]: I0121 17:26:59.295433 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:26:59 crc kubenswrapper[4902]: E0121 17:26:59.296215 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:27:12 crc kubenswrapper[4902]: I0121 17:27:12.300948 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:27:12 crc kubenswrapper[4902]: E0121 17:27:12.301693 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:27:25 crc kubenswrapper[4902]: I0121 17:27:25.295028 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:27:25 crc kubenswrapper[4902]: E0121 17:27:25.298690 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:27:39 crc kubenswrapper[4902]: I0121 17:27:39.295279 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:27:39 crc kubenswrapper[4902]: E0121 17:27:39.295876 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:27:50 crc kubenswrapper[4902]: I0121 17:27:50.299181 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:27:50 crc 
kubenswrapper[4902]: E0121 17:27:50.300165 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:28:02 crc kubenswrapper[4902]: I0121 17:28:02.295513 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:28:02 crc kubenswrapper[4902]: E0121 17:28:02.296377 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:28:13 crc kubenswrapper[4902]: I0121 17:28:13.295438 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:28:13 crc kubenswrapper[4902]: E0121 17:28:13.296221 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:28:28 crc kubenswrapper[4902]: I0121 17:28:28.302573 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:28:28 crc kubenswrapper[4902]: E0121 17:28:28.303464 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:28:41 crc kubenswrapper[4902]: I0121 17:28:41.295420 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:28:41 crc kubenswrapper[4902]: E0121 17:28:41.296308 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:28:55 crc kubenswrapper[4902]: I0121 17:28:55.296231 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:28:55 crc kubenswrapper[4902]: E0121 17:28:55.297337 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:29:08 crc kubenswrapper[4902]: I0121 17:29:08.311347 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:29:08 crc kubenswrapper[4902]: E0121 17:29:08.311997 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:29:20 crc kubenswrapper[4902]: I0121 17:29:20.296187 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:29:20 crc kubenswrapper[4902]: E0121 17:29:20.297263 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:29:31 crc kubenswrapper[4902]: I0121 17:29:31.295575 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:29:31 crc kubenswrapper[4902]: E0121 17:29:31.296713 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:29:45 crc kubenswrapper[4902]: I0121 17:29:45.295271 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:29:45 crc kubenswrapper[4902]: E0121 17:29:45.295993 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:29:57 crc kubenswrapper[4902]: I0121 17:29:57.294568 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:29:57 crc kubenswrapper[4902]: E0121 17:29:57.295266 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.200763 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29"] Jan 21 17:30:00 crc kubenswrapper[4902]: E0121 17:30:00.203144 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="extract-utilities" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.203285 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="extract-utilities" Jan 21 17:30:00 crc kubenswrapper[4902]: E0121 17:30:00.203430 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="registry-server" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.203520 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="registry-server" Jan 21 17:30:00 crc kubenswrapper[4902]: E0121 17:30:00.203624 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="extract-content" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.203713 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="extract-content" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.204111 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="registry-server" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.205529 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.208674 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.208995 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.243488 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29"] Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.341787 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbbdv\" (UniqueName: \"kubernetes.io/projected/3f3d7108-09c7-4727-8d9a-41c107bf4a09-kube-api-access-vbbdv\") pod \"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.341943 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f3d7108-09c7-4727-8d9a-41c107bf4a09-config-volume\") pod \"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.342030 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f3d7108-09c7-4727-8d9a-41c107bf4a09-secret-volume\") pod \"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.443768 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f3d7108-09c7-4727-8d9a-41c107bf4a09-secret-volume\") pod \"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.444617 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbbdv\" (UniqueName: \"kubernetes.io/projected/3f3d7108-09c7-4727-8d9a-41c107bf4a09-kube-api-access-vbbdv\") pod \"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.444801 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f3d7108-09c7-4727-8d9a-41c107bf4a09-config-volume\") pod \"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.446098 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f3d7108-09c7-4727-8d9a-41c107bf4a09-config-volume\") pod 
\"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.468412 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f3d7108-09c7-4727-8d9a-41c107bf4a09-secret-volume\") pod \"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.472343 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbbdv\" (UniqueName: \"kubernetes.io/projected/3f3d7108-09c7-4727-8d9a-41c107bf4a09-kube-api-access-vbbdv\") pod \"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.536127 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.996746 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29"] Jan 21 17:30:01 crc kubenswrapper[4902]: I0121 17:30:01.861544 4902 generic.go:334] "Generic (PLEG): container finished" podID="3f3d7108-09c7-4727-8d9a-41c107bf4a09" containerID="edc7eebdd8f8e0aabf90c74f44ac0c18bff134d103535bdb8854d2f5b6b7f0e0" exitCode=0 Jan 21 17:30:01 crc kubenswrapper[4902]: I0121 17:30:01.861632 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" event={"ID":"3f3d7108-09c7-4727-8d9a-41c107bf4a09","Type":"ContainerDied","Data":"edc7eebdd8f8e0aabf90c74f44ac0c18bff134d103535bdb8854d2f5b6b7f0e0"} Jan 21 17:30:01 crc kubenswrapper[4902]: I0121 17:30:01.861944 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" event={"ID":"3f3d7108-09c7-4727-8d9a-41c107bf4a09","Type":"ContainerStarted","Data":"3e4a8f4eeb283b9193d68e292218b38782d8c949d775f5d609078ac16ba2bf16"} Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.319323 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.414068 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbbdv\" (UniqueName: \"kubernetes.io/projected/3f3d7108-09c7-4727-8d9a-41c107bf4a09-kube-api-access-vbbdv\") pod \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.414369 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f3d7108-09c7-4727-8d9a-41c107bf4a09-config-volume\") pod \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.414468 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f3d7108-09c7-4727-8d9a-41c107bf4a09-secret-volume\") pod \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.414917 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f3d7108-09c7-4727-8d9a-41c107bf4a09-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f3d7108-09c7-4727-8d9a-41c107bf4a09" (UID: "3f3d7108-09c7-4727-8d9a-41c107bf4a09"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.415462 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f3d7108-09c7-4727-8d9a-41c107bf4a09-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.420877 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f3d7108-09c7-4727-8d9a-41c107bf4a09-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f3d7108-09c7-4727-8d9a-41c107bf4a09" (UID: "3f3d7108-09c7-4727-8d9a-41c107bf4a09"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.420904 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f3d7108-09c7-4727-8d9a-41c107bf4a09-kube-api-access-vbbdv" (OuterVolumeSpecName: "kube-api-access-vbbdv") pod "3f3d7108-09c7-4727-8d9a-41c107bf4a09" (UID: "3f3d7108-09c7-4727-8d9a-41c107bf4a09"). InnerVolumeSpecName "kube-api-access-vbbdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.518471 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f3d7108-09c7-4727-8d9a-41c107bf4a09-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.518514 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbbdv\" (UniqueName: \"kubernetes.io/projected/3f3d7108-09c7-4727-8d9a-41c107bf4a09-kube-api-access-vbbdv\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.882355 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" event={"ID":"3f3d7108-09c7-4727-8d9a-41c107bf4a09","Type":"ContainerDied","Data":"3e4a8f4eeb283b9193d68e292218b38782d8c949d775f5d609078ac16ba2bf16"} Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.882425 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e4a8f4eeb283b9193d68e292218b38782d8c949d775f5d609078ac16ba2bf16" Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.882771 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:04 crc kubenswrapper[4902]: I0121 17:30:04.404993 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"] Jan 21 17:30:04 crc kubenswrapper[4902]: I0121 17:30:04.414043 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"] Jan 21 17:30:06 crc kubenswrapper[4902]: I0121 17:30:06.313379 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="504c5756-9427-4037-be3a-481fc1e8715f" path="/var/lib/kubelet/pods/504c5756-9427-4037-be3a-481fc1e8715f/volumes" Jan 21 17:30:12 crc kubenswrapper[4902]: I0121 17:30:12.294775 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:30:12 crc kubenswrapper[4902]: E0121 17:30:12.295463 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:30:26 crc kubenswrapper[4902]: I0121 17:30:26.294813 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:30:26 crc kubenswrapper[4902]: E0121 17:30:26.295691 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:30:37 crc kubenswrapper[4902]: I0121 17:30:37.830408 4902 scope.go:117] "RemoveContainer" containerID="aa3c7bb404afe310e56cb2617f84d467c8f578e09af1f3e30d342fd88646315e" Jan 21 
17:30:38 crc kubenswrapper[4902]: I0121 17:30:38.306840 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:30:38 crc kubenswrapper[4902]: E0121 17:30:38.307532 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:30:50 crc kubenswrapper[4902]: I0121 17:30:50.296344 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:30:50 crc kubenswrapper[4902]: E0121 17:30:50.297529 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:31:02 crc kubenswrapper[4902]: I0121 17:31:02.305290 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:31:02 crc kubenswrapper[4902]: E0121 17:31:02.307985 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:31:17 crc kubenswrapper[4902]: I0121 17:31:17.296127 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:31:17 crc kubenswrapper[4902]: E0121 17:31:17.297373 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:31:22 crc kubenswrapper[4902]: E0121 17:31:22.521675 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/systemd-hostnamed.service\": RecentStats: unable to find data in memory cache]" Jan 21 17:31:28 crc kubenswrapper[4902]: I0121 17:31:28.309769 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:31:29 crc kubenswrapper[4902]: I0121 17:31:29.003229 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"990688b0a47233a469fcab728e8366f3b80f69b4ece9e59cbc348d43edad0605"} Jan 21 17:32:37 crc kubenswrapper[4902]: I0121 17:32:37.955372 4902 scope.go:117] 
"RemoveContainer" containerID="fb7e180575d42000763d72a5e12d5b9cefd04449586a7b77c1636ec477704dfa" Jan 21 17:32:37 crc kubenswrapper[4902]: I0121 17:32:37.984943 4902 scope.go:117] "RemoveContainer" containerID="75b14825d04b1497405658d6882c82cf75e25621b33fb88924b442ab680cdf83" Jan 21 17:32:38 crc kubenswrapper[4902]: I0121 17:32:38.013913 4902 scope.go:117] "RemoveContainer" containerID="232ab40424e3be90e3da29f1cecb02852c58216654f49b221218b2c376a531d9" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515134206721024446 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015134206722017364 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015134161314016504 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015134161315015455 5ustar corecore